0s autopkgtest [23:09:28]: starting date and time: 2024-05-13 23:09:28+0000
0s autopkgtest [23:09:28]: git checkout: 699e7f9f ssh-setup/nova: explicitely set 'fqdn' in cloud-init
0s autopkgtest [23:09:28]: host juju-7f2275-prod-proposed-migration-environment-3; command line: /home/ubuntu/autopkgtest/runner/autopkgtest --output-dir /tmp/autopkgtest-work.irp85nq7/out --timeout-copy=6000 -a i386 --setup-commands /home/ubuntu/autopkgtest-cloud/worker-config-production/setup-canonical.sh --apt-pocket=proposed=src:traitlets --apt-upgrade jupyter-notebook --timeout-short=300 --timeout-copy=20000 --timeout-build=20000 --env=ADT_TEST_TRIGGERS=traitlets/5.14.3-1 -- ssh -s /home/ubuntu/autopkgtest/ssh-setup/nova -- --flavor autopkgtest --security-groups autopkgtest-juju-7f2275-prod-proposed-migration-environment-3@lcy02-97.secgroup --name adt-oracular-i386-jupyter-notebook-20240513-230927-juju-7f2275-prod-proposed-migration-environment-3-94b9549f-2e0a-478a-a6e0-92e38f878270 --image adt/ubuntu-oracular-amd64-server --keyname testbed-juju-7f2275-prod-proposed-migration-environment-3 --net-id=net_prod-proposed-migration -e TERM=linux -e ''"'"'http_proxy=http://squid.internal:3128'"'"'' -e ''"'"'https_proxy=http://squid.internal:3128'"'"'' -e ''"'"'no_proxy=127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,ports.ubuntu.com,security.ubuntu.com,ddebs.ubuntu.com,changelogs.ubuntu.com,keyserver.ubuntu.com,launchpadlibrarian.net,launchpadcontent.net,launchpad.net,10.24.0.0/24,keystone.ps5.canonical.com,objectstorage.prodstack5.canonical.com'"'"'' --mirror=http://ftpmaster.internal/ubuntu/
575s autopkgtest [23:19:03]: testbed dpkg architecture: amd64
575s autopkgtest [23:19:03]: testbed apt version: 2.7.14build2
575s autopkgtest [23:19:03]: test architecture: i386
575s autopkgtest [23:19:03]: @@@@@@@@@@@@@@@@@@@@ test bed setup
575s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [73.9 kB]
575s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [1145 kB]
575s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [128 kB]
575s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [1964 B]
575s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [17.6 kB]
575s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main amd64 Packages [215 kB]
575s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main i386 Packages [171 kB]
575s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted amd64 Packages [7700 B]
575s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/universe amd64 Packages [1033 kB]
575s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe i386 Packages [523 kB]
575s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse amd64 Packages [53.1 kB]
575s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse i386 Packages [19.0 kB]
576s Fetched 3388 kB in 0s (6789 kB/s)
576s Reading package lists...
578s Reading package lists...
578s Building dependency tree...
578s Reading state information...
579s Calculating upgrade...
579s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
579s Reading package lists...
579s Building dependency tree...
579s Reading state information...
579s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
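For illustration only, a local re-run of this test with the same trigger could be driven as in the sketch below; it is not the production runner recorded at the top of this log, the output directory and qemu image path are placeholders, and only options that already appear in the logged command line are reused.

    # Minimal local-reproduction sketch (assumes autopkgtest and
    # autopkgtest-virt-qemu are installed; image path is hypothetical).
    import subprocess

    cmd = [
        "autopkgtest",
        "--output-dir", "/tmp/adt-jupyter-notebook",        # placeholder output dir
        "--apt-pocket=proposed=src:traitlets",               # pull only traitlets from -proposed
        "--apt-upgrade",
        "--env=ADT_TEST_TRIGGERS=traitlets/5.14.3-1",        # trigger recorded in this log
        "jupyter-notebook",                                  # package under test
        "--",
        "qemu", "/path/to/autopkgtest-oracular-amd64.img",   # hypothetical local testbed image
    ]
    subprocess.run(cmd, check=True)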
579s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease
579s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease
579s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease
579s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease
581s Reading package lists...
581s Reading package lists...
581s Building dependency tree...
581s Reading state information...
581s Calculating upgrade...
581s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
581s Reading package lists...
582s Building dependency tree...
582s Reading state information...
582s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
583s autopkgtest [23:19:11]: testbed running kernel: Linux 6.8.0-31-generic #31-Ubuntu SMP PREEMPT_DYNAMIC Sat Apr 20 00:40:06 UTC 2024
583s autopkgtest [23:19:11]: @@@@@@@@@@@@@@@@@@@@ apt-source jupyter-notebook
584s Get:1 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (dsc) [3886 B]
584s Get:2 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (tar) [8501 kB]
584s Get:3 http://ftpmaster.internal/ubuntu oracular/universe jupyter-notebook 6.4.12-2.2ubuntu1 (diff) [49.6 kB]
584s gpgv: Signature made Thu Feb 15 18:11:52 2024 UTC
584s gpgv: using RSA key D09F8A854F1055BCFC482C4B23566B906047AFC8
584s gpgv: Can't check signature: No public key
584s dpkg-source: warning: cannot verify inline signature for ./jupyter-notebook_6.4.12-2.2ubuntu1.dsc: no acceptable signature found
585s autopkgtest [23:19:13]: testing package jupyter-notebook version 6.4.12-2.2ubuntu1
585s autopkgtest [23:19:13]: build not needed
587s autopkgtest [23:19:15]: test pytest: preparing testbed
589s Note, using file '/tmp/autopkgtest.FMSSaJ/1-autopkgtest-satdep.dsc' to get the build dependencies
589s Reading package lists...
590s Building dependency tree...
590s Reading state information...
590s Starting pkgProblemResolver with broken count: 0
590s Starting 2 pkgProblemResolver with broken count: 0
590s Done
591s The following NEW packages will be installed:
591s build-essential cpp cpp-13 cpp-13-x86-64-linux-gnu cpp-x86-64-linux-gnu
591s fonts-font-awesome fonts-glyphicons-halflings fonts-lato fonts-mathjax g++
591s g++-13 g++-13-x86-64-linux-gnu g++-x86-64-linux-gnu gcc gcc-13 gcc-13-base
591s gcc-13-x86-64-linux-gnu gcc-x86-64-linux-gnu gdb jupyter-core
591s jupyter-notebook libasan8 libatomic1 libbabeltrace1 libcc1-0
591s libdebuginfod-common libdebuginfod1t64 libgcc-13-dev libgomp1 libhwasan0
591s libipt2 libisl23 libitm1 libjs-backbone libjs-bootstrap libjs-bootstrap-tour
591s libjs-codemirror libjs-es6-promise libjs-jed libjs-jquery
591s libjs-jquery-typeahead libjs-jquery-ui libjs-marked libjs-mathjax
591s libjs-moment libjs-requirejs libjs-requirejs-text libjs-sphinxdoc
591s libjs-text-encoding libjs-underscore libjs-xterm liblsan0 libmpc3
591s libnorm1t64 libpgm-5.3-0t64 libpython3.12t64 libquadmath0 libsodium23
591s libsource-highlight-common libsource-highlight4t64 libstdc++-13-dev libtsan2
591s libubsan1 libxslt1.1 libzmq5 node-jed python-notebook-doc
591s python-tinycss2-common python3-argon2 python3-asttokens python3-bleach
591s python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil
591s python3-debugpy python3-decorator python3-defusedxml python3-entrypoints
591s python3-executing python3-fastjsonschema python3-html5lib python3-iniconfig
591s python3-ipykernel python3-ipython python3-ipython-genutils python3-jedi
591s python3-jupyter-client python3-jupyter-core python3-jupyterlab-pygments
591s python3-lxml python3-lxml-html-clean python3-matplotlib-inline
591s python3-nbclient python3-nbconvert python3-nbformat python3-nest-asyncio
591s python3-notebook python3-packaging python3-pandocfilters python3-parso
591s python3-pexpect python3-platformdirs python3-pluggy
591s python3-prometheus-client python3-prompt-toolkit python3-psutil
591s python3-ptyprocess python3-pure-eval python3-py python3-pydevd
591s python3-pytest python3-requests-unixsocket python3-send2trash
591s python3-soupsieve python3-stack-data python3-terminado python3-tinycss2
591s python3-tornado python3-traitlets python3-typeshed python3-wcwidth
591s python3-webencodings python3-zmq sphinx-rtd-theme-common
591s 0 upgraded, 126 newly installed, 0 to remove and 0 not upgraded.
591s Need to get 97.1 MB of archives.
591s After this operation, 398 MB of additional disk space will be used.
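The resolver pulls python3-traitlets 5.14.3-1 from oracular-proposed (the trigger named above) while the remaining dependencies come from the release pocket. Once the install that follows completes, an illustrative sanity check of the testbed (not part of the test suite) could confirm that the proposed traitlets is the one actually importable:

    # Illustrative check, assuming the Debian package version 5.14.3-1
    # corresponds to upstream traitlets 5.14.3.
    from importlib.metadata import version

    print(version("traitlets"))   # expected: 5.14.3, the -proposed upload under test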
591s Get:1 http://ftpmaster.internal/ubuntu oracular/main amd64 fonts-lato all 2.015-1 [2781 kB] 591s Get:2 http://ftpmaster.internal/ubuntu oracular/main amd64 libdebuginfod-common all 0.190-1.1build4 [14.2 kB] 591s Get:3 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13-base amd64 13.2.0-23ubuntu4 [49.0 kB] 591s Get:4 http://ftpmaster.internal/ubuntu oracular/main amd64 libisl23 amd64 0.26-3build1 [680 kB] 591s Get:5 http://ftpmaster.internal/ubuntu oracular/main amd64 libmpc3 amd64 1.3.1-1build1 [54.5 kB] 591s Get:6 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [11.2 MB] 591s Get:7 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-13 amd64 13.2.0-23ubuntu4 [1032 B] 591s Get:8 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [5326 B] 591s Get:9 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp amd64 4:13.2.0-7ubuntu1 [22.4 kB] 591s Get:10 http://ftpmaster.internal/ubuntu oracular/main amd64 libcc1-0 amd64 14-20240412-0ubuntu1 [47.7 kB] 591s Get:11 http://ftpmaster.internal/ubuntu oracular/main amd64 libgomp1 amd64 14-20240412-0ubuntu1 [147 kB] 591s Get:12 http://ftpmaster.internal/ubuntu oracular/main amd64 libitm1 amd64 14-20240412-0ubuntu1 [28.9 kB] 591s Get:13 http://ftpmaster.internal/ubuntu oracular/main amd64 libatomic1 amd64 14-20240412-0ubuntu1 [10.4 kB] 591s Get:14 http://ftpmaster.internal/ubuntu oracular/main amd64 libasan8 amd64 14-20240412-0ubuntu1 [3024 kB] 591s Get:15 http://ftpmaster.internal/ubuntu oracular/main amd64 liblsan0 amd64 14-20240412-0ubuntu1 [1313 kB] 591s Get:16 http://ftpmaster.internal/ubuntu oracular/main amd64 libtsan2 amd64 14-20240412-0ubuntu1 [2736 kB] 591s Get:17 http://ftpmaster.internal/ubuntu oracular/main amd64 libubsan1 amd64 14-20240412-0ubuntu1 [1175 kB] 591s Get:18 http://ftpmaster.internal/ubuntu oracular/main amd64 libhwasan0 amd64 14-20240412-0ubuntu1 [1632 kB] 591s Get:19 http://ftpmaster.internal/ubuntu oracular/main amd64 libquadmath0 amd64 14-20240412-0ubuntu1 [153 kB] 591s Get:20 http://ftpmaster.internal/ubuntu oracular/main amd64 libgcc-13-dev amd64 13.2.0-23ubuntu4 [2688 kB] 591s Get:21 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [21.9 MB] 591s Get:22 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13 amd64 13.2.0-23ubuntu4 [482 kB] 591s Get:23 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [1212 B] 591s Get:24 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc amd64 4:13.2.0-7ubuntu1 [5018 B] 591s Get:25 http://ftpmaster.internal/ubuntu oracular/main amd64 libstdc++-13-dev amd64 13.2.0-23ubuntu4 [2399 kB] 591s Get:26 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [12.5 MB] 591s Get:27 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-13 amd64 13.2.0-23ubuntu4 [14.5 kB] 591s Get:28 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [964 B] 591s Get:29 http://ftpmaster.internal/ubuntu oracular/main amd64 g++ amd64 4:13.2.0-7ubuntu1 [1100 B] 591s Get:30 http://ftpmaster.internal/ubuntu oracular/main amd64 build-essential amd64 12.10ubuntu1 [4928 B] 591s Get:31 http://ftpmaster.internal/ubuntu oracular/main amd64 fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 591s Get:32 http://ftpmaster.internal/ubuntu oracular/universe amd64 fonts-glyphicons-halflings all 
1.009~3.4.1+dfsg-3 [118 kB] 591s Get:33 http://ftpmaster.internal/ubuntu oracular/main amd64 fonts-mathjax all 2.7.9+dfsg-1 [2208 kB] 591s Get:34 http://ftpmaster.internal/ubuntu oracular/main amd64 libbabeltrace1 amd64 1.5.11-3build3 [164 kB] 591s Get:35 http://ftpmaster.internal/ubuntu oracular/main amd64 libdebuginfod1t64 amd64 0.190-1.1build4 [17.1 kB] 591s Get:36 http://ftpmaster.internal/ubuntu oracular/main amd64 libipt2 amd64 2.0.6-1build1 [45.7 kB] 591s Get:37 http://ftpmaster.internal/ubuntu oracular/main amd64 libpython3.12t64 amd64 3.12.3-1 [2339 kB] 591s Get:38 http://ftpmaster.internal/ubuntu oracular/main amd64 libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB] 591s Get:39 http://ftpmaster.internal/ubuntu oracular/main amd64 libsource-highlight4t64 amd64 3.1.9-4.3build1 [258 kB] 591s Get:40 http://ftpmaster.internal/ubuntu oracular/main amd64 gdb amd64 15.0.50.20240403-0ubuntu1 [4010 kB] 591s Get:41 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-platformdirs all 4.2.0-1 [16.1 kB] 591s Get:42 http://ftpmaster.internal/ubuntu oracular-proposed/universe amd64 python3-traitlets all 5.14.3-1 [71.3 kB] 591s Get:43 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyter-core all 5.3.2-1ubuntu1 [25.5 kB] 591s Get:44 http://ftpmaster.internal/ubuntu oracular/universe amd64 jupyter-core all 5.3.2-1ubuntu1 [4044 B] 591s Get:45 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 591s Get:46 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB] 591s Get:47 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-bootstrap all 3.4.1+dfsg-3 [129 kB] 591s Get:48 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 591s Get:49 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB] 591s Get:50 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB] 591s Get:51 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-es6-promise all 4.2.8-12 [14.1 kB] 591s Get:52 http://ftpmaster.internal/ubuntu oracular/universe amd64 node-jed all 1.1.1-4 [15.2 kB] 591s Get:53 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jed all 1.1.1-4 [2584 B] 591s Get:54 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB] 591s Get:55 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB] 591s Get:56 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB] 591s Get:57 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-mathjax all 2.7.9+dfsg-1 [5665 kB] 591s Get:58 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-moment all 2.29.4+ds-1 [147 kB] 591s Get:59 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-requirejs all 2.3.6+ds+~2.1.34-2 [201 kB] 591s Get:60 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-requirejs-text all 2.0.12-1.1 [9056 B] 591s Get:61 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-text-encoding all 0.7.0-5 [140 kB] 591s Get:62 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-xterm all 5.3.0-2 [476 kB] 591s Get:63 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-ptyprocess all 0.7.0-5 [15.1 kB] 591s Get:64 http://ftpmaster.internal/ubuntu oracular/main 
amd64 python3-tornado amd64 6.4.0-1build1 [297 kB] 591s Get:65 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-terminado all 0.17.1-1 [15.9 kB] 591s Get:66 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-argon2 amd64 21.1.0-2build1 [21.0 kB] 591s Get:67 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-comm all 0.2.1-1 [7016 B] 591s Get:68 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-bytecode all 0.15.1-3 [44.7 kB] 591s Get:69 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-coverage amd64 7.4.4+dfsg1-0ubuntu2 [147 kB] 591s Get:70 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pydevd amd64 2.10.0+ds-10ubuntu1 [637 kB] 591s Get:71 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB] 591s Get:72 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-decorator all 5.1.1-5 [10.1 kB] 591s Get:73 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-parso all 0.8.3-1 [67.2 kB] 591s Get:74 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB] 591s Get:75 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jedi all 0.19.1+ds1-1 [693 kB] 591s Get:76 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-matplotlib-inline all 0.1.6-2 [8784 B] 591s Get:77 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-pexpect all 4.9-2 [48.1 kB] 591s Get:78 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 591s Get:79 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-prompt-toolkit all 3.0.43-1 [256 kB] 591s Get:80 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-asttokens all 2.4.1-1 [20.9 kB] 591s Get:81 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-executing all 2.0.1-0.1 [23.3 kB] 591s Get:82 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pure-eval all 0.2.2-2 [11.1 kB] 591s Get:83 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-stack-data all 0.6.3-1 [22.0 kB] 591s Get:84 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipython all 8.20.0-1 [561 kB] 591s Get:85 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-dateutil all 2.8.2-3ubuntu1 [79.4 kB] 591s Get:86 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-entrypoints all 0.4-2 [7146 B] 591s Get:87 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nest-asyncio all 1.5.4-1 [6256 B] 591s Get:88 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-py all 1.11.0-2 [72.7 kB] 591s Get:89 http://ftpmaster.internal/ubuntu oracular/universe amd64 libnorm1t64 amd64 1.5.9+dfsg-3.1build1 [154 kB] 591s Get:90 http://ftpmaster.internal/ubuntu oracular/universe amd64 libpgm-5.3-0t64 amd64 5.3.128~dfsg-2.1build1 [167 kB] 591s Get:91 http://ftpmaster.internal/ubuntu oracular/main amd64 libsodium23 amd64 1.0.18-1build3 [161 kB] 591s Get:92 http://ftpmaster.internal/ubuntu oracular/universe amd64 libzmq5 amd64 4.3.5-1build2 [260 kB] 591s Get:93 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-zmq amd64 24.0.1-5build1 [286 kB] 591s Get:94 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB] 592s Get:95 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-packaging all 24.0-1 [41.1 kB] 592s Get:96 http://ftpmaster.internal/ubuntu 
oracular/main amd64 python3-psutil amd64 5.9.8-2build2 [195 kB] 592s Get:97 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipykernel all 6.29.3-1 [82.4 kB] 592s Get:98 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipython-genutils all 0.2.0-6 [22.0 kB] 592s Get:99 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-webencodings all 0.5.1-5 [11.5 kB] 592s Get:100 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-html5lib all 1.1-6 [88.8 kB] 592s Get:101 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-bleach all 6.1.0-2 [49.6 kB] 592s Get:102 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-soupsieve all 2.5-1 [33.0 kB] 592s Get:103 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-bs4 all 4.12.3-1 [109 kB] 592s Get:104 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-defusedxml all 0.7.1-2 [42.0 kB] 592s Get:105 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyterlab-pygments all 0.2.2-3 [6054 B] 592s Get:106 http://ftpmaster.internal/ubuntu oracular/main amd64 libxslt1.1 amd64 1.1.39-0exp1build1 [167 kB] 592s Get:107 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-lxml amd64 5.2.1-1 [1243 kB] 592s Get:108 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-fastjsonschema all 2.19.0-1 [19.6 kB] 592s Get:109 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbformat all 5.9.1-1 [41.2 kB] 592s Get:110 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbclient all 0.8.0-1 [55.6 kB] 592s Get:111 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pandocfilters all 1.5.1-1 [23.6 kB] 592s Get:112 http://ftpmaster.internal/ubuntu oracular/universe amd64 python-tinycss2-common all 1.2.1-2 [33.9 kB] 592s Get:113 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-tinycss2 all 1.2.1-2 [19.6 kB] 592s Get:114 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-lxml-html-clean all 0.1.1-1 [12.0 kB] 592s Get:115 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbconvert all 6.5.3-5 [152 kB] 592s Get:116 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB] 592s Get:117 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-send2trash all 1.8.2-1 [15.5 kB] 592s Get:118 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB] 592s Get:119 http://ftpmaster.internal/ubuntu oracular/universe amd64 jupyter-notebook all 6.4.12-2.2ubuntu1 [10.4 kB] 592s Get:120 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-sphinxdoc all 7.2.6-6 [149 kB] 592s Get:121 http://ftpmaster.internal/ubuntu oracular/main amd64 sphinx-rtd-theme-common all 2.0.0+dfsg-1 [1012 kB] 592s Get:122 http://ftpmaster.internal/ubuntu oracular/universe amd64 python-notebook-doc all 6.4.12-2.2ubuntu1 [2540 kB] 592s Get:123 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 592s Get:124 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pluggy all 1.4.0-1 [20.4 kB] 592s Get:125 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pytest all 7.4.4-1 [305 kB] 592s Get:126 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-requests-unixsocket all 0.3.0-3ubuntu3 [7438 B] 592s Preconfiguring packages ... 592s Fetched 97.1 MB in 1s (102 MB/s) 592s Selecting previously unselected package fonts-lato. 
592s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 73897 files and directories currently installed.) 592s Preparing to unpack .../000-fonts-lato_2.015-1_all.deb ... 592s Unpacking fonts-lato (2.015-1) ... 593s Selecting previously unselected package libdebuginfod-common. 593s Preparing to unpack .../001-libdebuginfod-common_0.190-1.1build4_all.deb ... 593s Unpacking libdebuginfod-common (0.190-1.1build4) ... 593s Selecting previously unselected package gcc-13-base:amd64. 593s Preparing to unpack .../002-gcc-13-base_13.2.0-23ubuntu4_amd64.deb ... 593s Unpacking gcc-13-base:amd64 (13.2.0-23ubuntu4) ... 593s Selecting previously unselected package libisl23:amd64. 593s Preparing to unpack .../003-libisl23_0.26-3build1_amd64.deb ... 593s Unpacking libisl23:amd64 (0.26-3build1) ... 593s Selecting previously unselected package libmpc3:amd64. 593s Preparing to unpack .../004-libmpc3_1.3.1-1build1_amd64.deb ... 593s Unpacking libmpc3:amd64 (1.3.1-1build1) ... 593s Selecting previously unselected package cpp-13-x86-64-linux-gnu. 593s Preparing to unpack .../005-cpp-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 593s Unpacking cpp-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 593s Selecting previously unselected package cpp-13. 593s Preparing to unpack .../006-cpp-13_13.2.0-23ubuntu4_amd64.deb ... 593s Unpacking cpp-13 (13.2.0-23ubuntu4) ... 593s Selecting previously unselected package cpp-x86-64-linux-gnu. 593s Preparing to unpack .../007-cpp-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 593s Unpacking cpp-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 593s Selecting previously unselected package cpp. 593s Preparing to unpack .../008-cpp_4%3a13.2.0-7ubuntu1_amd64.deb ... 593s Unpacking cpp (4:13.2.0-7ubuntu1) ... 593s Selecting previously unselected package libcc1-0:amd64. 593s Preparing to unpack .../009-libcc1-0_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libcc1-0:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libgomp1:amd64. 593s Preparing to unpack .../010-libgomp1_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libgomp1:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libitm1:amd64. 593s Preparing to unpack .../011-libitm1_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libitm1:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libatomic1:amd64. 593s Preparing to unpack .../012-libatomic1_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libatomic1:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libasan8:amd64. 593s Preparing to unpack .../013-libasan8_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libasan8:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package liblsan0:amd64. 593s Preparing to unpack .../014-liblsan0_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking liblsan0:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libtsan2:amd64. 
593s Preparing to unpack .../015-libtsan2_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libtsan2:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libubsan1:amd64. 593s Preparing to unpack .../016-libubsan1_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libubsan1:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libhwasan0:amd64. 593s Preparing to unpack .../017-libhwasan0_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libhwasan0:amd64 (14-20240412-0ubuntu1) ... 593s Selecting previously unselected package libquadmath0:amd64. 593s Preparing to unpack .../018-libquadmath0_14-20240412-0ubuntu1_amd64.deb ... 593s Unpacking libquadmath0:amd64 (14-20240412-0ubuntu1) ... 594s Selecting previously unselected package libgcc-13-dev:amd64. 594s Preparing to unpack .../019-libgcc-13-dev_13.2.0-23ubuntu4_amd64.deb ... 594s Unpacking libgcc-13-dev:amd64 (13.2.0-23ubuntu4) ... 594s Selecting previously unselected package gcc-13-x86-64-linux-gnu. 594s Preparing to unpack .../020-gcc-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 594s Unpacking gcc-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 594s Selecting previously unselected package gcc-13. 594s Preparing to unpack .../021-gcc-13_13.2.0-23ubuntu4_amd64.deb ... 594s Unpacking gcc-13 (13.2.0-23ubuntu4) ... 594s Selecting previously unselected package gcc-x86-64-linux-gnu. 594s Preparing to unpack .../022-gcc-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 594s Unpacking gcc-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 594s Selecting previously unselected package gcc. 594s Preparing to unpack .../023-gcc_4%3a13.2.0-7ubuntu1_amd64.deb ... 594s Unpacking gcc (4:13.2.0-7ubuntu1) ... 594s Selecting previously unselected package libstdc++-13-dev:amd64. 594s Preparing to unpack .../024-libstdc++-13-dev_13.2.0-23ubuntu4_amd64.deb ... 594s Unpacking libstdc++-13-dev:amd64 (13.2.0-23ubuntu4) ... 594s Selecting previously unselected package g++-13-x86-64-linux-gnu. 594s Preparing to unpack .../025-g++-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 594s Unpacking g++-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 595s Selecting previously unselected package g++-13. 595s Preparing to unpack .../026-g++-13_13.2.0-23ubuntu4_amd64.deb ... 595s Unpacking g++-13 (13.2.0-23ubuntu4) ... 595s Selecting previously unselected package g++-x86-64-linux-gnu. 595s Preparing to unpack .../027-g++-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 595s Unpacking g++-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 595s Selecting previously unselected package g++. 595s Preparing to unpack .../028-g++_4%3a13.2.0-7ubuntu1_amd64.deb ... 595s Unpacking g++ (4:13.2.0-7ubuntu1) ... 595s Selecting previously unselected package build-essential. 595s Preparing to unpack .../029-build-essential_12.10ubuntu1_amd64.deb ... 595s Unpacking build-essential (12.10ubuntu1) ... 595s Selecting previously unselected package fonts-font-awesome. 595s Preparing to unpack .../030-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 595s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 595s Selecting previously unselected package fonts-glyphicons-halflings. 595s Preparing to unpack .../031-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ... 595s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 595s Selecting previously unselected package fonts-mathjax. 595s Preparing to unpack .../032-fonts-mathjax_2.7.9+dfsg-1_all.deb ... 595s Unpacking fonts-mathjax (2.7.9+dfsg-1) ... 
595s Selecting previously unselected package libbabeltrace1:amd64. 595s Preparing to unpack .../033-libbabeltrace1_1.5.11-3build3_amd64.deb ... 595s Unpacking libbabeltrace1:amd64 (1.5.11-3build3) ... 595s Selecting previously unselected package libdebuginfod1t64:amd64. 595s Preparing to unpack .../034-libdebuginfod1t64_0.190-1.1build4_amd64.deb ... 595s Unpacking libdebuginfod1t64:amd64 (0.190-1.1build4) ... 595s Selecting previously unselected package libipt2. 595s Preparing to unpack .../035-libipt2_2.0.6-1build1_amd64.deb ... 595s Unpacking libipt2 (2.0.6-1build1) ... 595s Selecting previously unselected package libpython3.12t64:amd64. 595s Preparing to unpack .../036-libpython3.12t64_3.12.3-1_amd64.deb ... 595s Unpacking libpython3.12t64:amd64 (3.12.3-1) ... 595s Selecting previously unselected package libsource-highlight-common. 595s Preparing to unpack .../037-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 595s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 595s Selecting previously unselected package libsource-highlight4t64:amd64. 595s Preparing to unpack .../038-libsource-highlight4t64_3.1.9-4.3build1_amd64.deb ... 595s Unpacking libsource-highlight4t64:amd64 (3.1.9-4.3build1) ... 595s Selecting previously unselected package gdb. 595s Preparing to unpack .../039-gdb_15.0.50.20240403-0ubuntu1_amd64.deb ... 595s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 595s Selecting previously unselected package python3-platformdirs. 595s Preparing to unpack .../040-python3-platformdirs_4.2.0-1_all.deb ... 595s Unpacking python3-platformdirs (4.2.0-1) ... 595s Selecting previously unselected package python3-traitlets. 595s Preparing to unpack .../041-python3-traitlets_5.14.3-1_all.deb ... 595s Unpacking python3-traitlets (5.14.3-1) ... 595s Selecting previously unselected package python3-jupyter-core. 595s Preparing to unpack .../042-python3-jupyter-core_5.3.2-1ubuntu1_all.deb ... 595s Unpacking python3-jupyter-core (5.3.2-1ubuntu1) ... 595s Selecting previously unselected package jupyter-core. 595s Preparing to unpack .../043-jupyter-core_5.3.2-1ubuntu1_all.deb ... 595s Unpacking jupyter-core (5.3.2-1ubuntu1) ... 595s Selecting previously unselected package libjs-underscore. 595s Preparing to unpack .../044-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 595s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 595s Selecting previously unselected package libjs-backbone. 595s Preparing to unpack .../045-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 595s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 595s Selecting previously unselected package libjs-bootstrap. 595s Preparing to unpack .../046-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 595s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 595s Selecting previously unselected package libjs-jquery. 595s Preparing to unpack .../047-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 595s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 595s Selecting previously unselected package libjs-bootstrap-tour. 595s Preparing to unpack .../048-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 595s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 596s Selecting previously unselected package libjs-codemirror. 596s Preparing to unpack .../049-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ... 596s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ... 596s Selecting previously unselected package libjs-es6-promise. 596s Preparing to unpack .../050-libjs-es6-promise_4.2.8-12_all.deb ... 596s Unpacking libjs-es6-promise (4.2.8-12) ... 
596s Selecting previously unselected package node-jed. 596s Preparing to unpack .../051-node-jed_1.1.1-4_all.deb ... 596s Unpacking node-jed (1.1.1-4) ... 596s Selecting previously unselected package libjs-jed. 596s Preparing to unpack .../052-libjs-jed_1.1.1-4_all.deb ... 596s Unpacking libjs-jed (1.1.1-4) ... 596s Selecting previously unselected package libjs-jquery-typeahead. 596s Preparing to unpack .../053-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 596s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 596s Selecting previously unselected package libjs-jquery-ui. 596s Preparing to unpack .../054-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 596s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 596s Selecting previously unselected package libjs-marked. 596s Preparing to unpack .../055-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ... 596s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ... 596s Selecting previously unselected package libjs-mathjax. 596s Preparing to unpack .../056-libjs-mathjax_2.7.9+dfsg-1_all.deb ... 596s Unpacking libjs-mathjax (2.7.9+dfsg-1) ... 597s Selecting previously unselected package libjs-moment. 597s Preparing to unpack .../057-libjs-moment_2.29.4+ds-1_all.deb ... 597s Unpacking libjs-moment (2.29.4+ds-1) ... 597s Selecting previously unselected package libjs-requirejs. 597s Preparing to unpack .../058-libjs-requirejs_2.3.6+ds+~2.1.34-2_all.deb ... 597s Unpacking libjs-requirejs (2.3.6+ds+~2.1.34-2) ... 597s Selecting previously unselected package libjs-requirejs-text. 597s Preparing to unpack .../059-libjs-requirejs-text_2.0.12-1.1_all.deb ... 597s Unpacking libjs-requirejs-text (2.0.12-1.1) ... 597s Selecting previously unselected package libjs-text-encoding. 597s Preparing to unpack .../060-libjs-text-encoding_0.7.0-5_all.deb ... 597s Unpacking libjs-text-encoding (0.7.0-5) ... 597s Selecting previously unselected package libjs-xterm. 597s Preparing to unpack .../061-libjs-xterm_5.3.0-2_all.deb ... 597s Unpacking libjs-xterm (5.3.0-2) ... 597s Selecting previously unselected package python3-ptyprocess. 597s Preparing to unpack .../062-python3-ptyprocess_0.7.0-5_all.deb ... 597s Unpacking python3-ptyprocess (0.7.0-5) ... 597s Selecting previously unselected package python3-tornado. 597s Preparing to unpack .../063-python3-tornado_6.4.0-1build1_amd64.deb ... 597s Unpacking python3-tornado (6.4.0-1build1) ... 597s Selecting previously unselected package python3-terminado. 597s Preparing to unpack .../064-python3-terminado_0.17.1-1_all.deb ... 597s Unpacking python3-terminado (0.17.1-1) ... 597s Selecting previously unselected package python3-argon2. 597s Preparing to unpack .../065-python3-argon2_21.1.0-2build1_amd64.deb ... 597s Unpacking python3-argon2 (21.1.0-2build1) ... 597s Selecting previously unselected package python3-comm. 597s Preparing to unpack .../066-python3-comm_0.2.1-1_all.deb ... 597s Unpacking python3-comm (0.2.1-1) ... 597s Selecting previously unselected package python3-bytecode. 597s Preparing to unpack .../067-python3-bytecode_0.15.1-3_all.deb ... 597s Unpacking python3-bytecode (0.15.1-3) ... 597s Selecting previously unselected package python3-coverage. 597s Preparing to unpack .../068-python3-coverage_7.4.4+dfsg1-0ubuntu2_amd64.deb ... 597s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 597s Selecting previously unselected package python3-pydevd. 597s Preparing to unpack .../069-python3-pydevd_2.10.0+ds-10ubuntu1_amd64.deb ... 597s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 
597s Selecting previously unselected package python3-debugpy. 597s Preparing to unpack .../070-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 597s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 597s Selecting previously unselected package python3-decorator. 597s Preparing to unpack .../071-python3-decorator_5.1.1-5_all.deb ... 597s Unpacking python3-decorator (5.1.1-5) ... 597s Selecting previously unselected package python3-parso. 597s Preparing to unpack .../072-python3-parso_0.8.3-1_all.deb ... 597s Unpacking python3-parso (0.8.3-1) ... 597s Selecting previously unselected package python3-typeshed. 597s Preparing to unpack .../073-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 597s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 598s Selecting previously unselected package python3-jedi. 598s Preparing to unpack .../074-python3-jedi_0.19.1+ds1-1_all.deb ... 598s Unpacking python3-jedi (0.19.1+ds1-1) ... 598s Selecting previously unselected package python3-matplotlib-inline. 598s Preparing to unpack .../075-python3-matplotlib-inline_0.1.6-2_all.deb ... 598s Unpacking python3-matplotlib-inline (0.1.6-2) ... 598s Selecting previously unselected package python3-pexpect. 598s Preparing to unpack .../076-python3-pexpect_4.9-2_all.deb ... 598s Unpacking python3-pexpect (4.9-2) ... 598s Selecting previously unselected package python3-wcwidth. 598s Preparing to unpack .../077-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 598s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 598s Selecting previously unselected package python3-prompt-toolkit. 598s Preparing to unpack .../078-python3-prompt-toolkit_3.0.43-1_all.deb ... 598s Unpacking python3-prompt-toolkit (3.0.43-1) ... 598s Selecting previously unselected package python3-asttokens. 598s Preparing to unpack .../079-python3-asttokens_2.4.1-1_all.deb ... 598s Unpacking python3-asttokens (2.4.1-1) ... 598s Selecting previously unselected package python3-executing. 598s Preparing to unpack .../080-python3-executing_2.0.1-0.1_all.deb ... 598s Unpacking python3-executing (2.0.1-0.1) ... 598s Selecting previously unselected package python3-pure-eval. 598s Preparing to unpack .../081-python3-pure-eval_0.2.2-2_all.deb ... 598s Unpacking python3-pure-eval (0.2.2-2) ... 598s Selecting previously unselected package python3-stack-data. 598s Preparing to unpack .../082-python3-stack-data_0.6.3-1_all.deb ... 598s Unpacking python3-stack-data (0.6.3-1) ... 598s Selecting previously unselected package python3-ipython. 598s Preparing to unpack .../083-python3-ipython_8.20.0-1_all.deb ... 598s Unpacking python3-ipython (8.20.0-1) ... 598s Selecting previously unselected package python3-dateutil. 598s Preparing to unpack .../084-python3-dateutil_2.8.2-3ubuntu1_all.deb ... 598s Unpacking python3-dateutil (2.8.2-3ubuntu1) ... 599s Selecting previously unselected package python3-entrypoints. 599s Preparing to unpack .../085-python3-entrypoints_0.4-2_all.deb ... 599s Unpacking python3-entrypoints (0.4-2) ... 599s Selecting previously unselected package python3-nest-asyncio. 599s Preparing to unpack .../086-python3-nest-asyncio_1.5.4-1_all.deb ... 599s Unpacking python3-nest-asyncio (1.5.4-1) ... 599s Selecting previously unselected package python3-py. 599s Preparing to unpack .../087-python3-py_1.11.0-2_all.deb ... 599s Unpacking python3-py (1.11.0-2) ... 599s Selecting previously unselected package libnorm1t64:amd64. 599s Preparing to unpack .../088-libnorm1t64_1.5.9+dfsg-3.1build1_amd64.deb ... 
599s Unpacking libnorm1t64:amd64 (1.5.9+dfsg-3.1build1) ... 599s Selecting previously unselected package libpgm-5.3-0t64:amd64. 599s Preparing to unpack .../089-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_amd64.deb ... 599s Unpacking libpgm-5.3-0t64:amd64 (5.3.128~dfsg-2.1build1) ... 599s Selecting previously unselected package libsodium23:amd64. 599s Preparing to unpack .../090-libsodium23_1.0.18-1build3_amd64.deb ... 599s Unpacking libsodium23:amd64 (1.0.18-1build3) ... 599s Selecting previously unselected package libzmq5:amd64. 599s Preparing to unpack .../091-libzmq5_4.3.5-1build2_amd64.deb ... 599s Unpacking libzmq5:amd64 (4.3.5-1build2) ... 599s Selecting previously unselected package python3-zmq. 599s Preparing to unpack .../092-python3-zmq_24.0.1-5build1_amd64.deb ... 599s Unpacking python3-zmq (24.0.1-5build1) ... 599s Selecting previously unselected package python3-jupyter-client. 599s Preparing to unpack .../093-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ... 599s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ... 599s Selecting previously unselected package python3-packaging. 599s Preparing to unpack .../094-python3-packaging_24.0-1_all.deb ... 599s Unpacking python3-packaging (24.0-1) ... 599s Selecting previously unselected package python3-psutil. 599s Preparing to unpack .../095-python3-psutil_5.9.8-2build2_amd64.deb ... 599s Unpacking python3-psutil (5.9.8-2build2) ... 599s Selecting previously unselected package python3-ipykernel. 599s Preparing to unpack .../096-python3-ipykernel_6.29.3-1_all.deb ... 599s Unpacking python3-ipykernel (6.29.3-1) ... 599s Selecting previously unselected package python3-ipython-genutils. 599s Preparing to unpack .../097-python3-ipython-genutils_0.2.0-6_all.deb ... 599s Unpacking python3-ipython-genutils (0.2.0-6) ... 599s Selecting previously unselected package python3-webencodings. 599s Preparing to unpack .../098-python3-webencodings_0.5.1-5_all.deb ... 599s Unpacking python3-webencodings (0.5.1-5) ... 599s Selecting previously unselected package python3-html5lib. 599s Preparing to unpack .../099-python3-html5lib_1.1-6_all.deb ... 599s Unpacking python3-html5lib (1.1-6) ... 599s Selecting previously unselected package python3-bleach. 599s Preparing to unpack .../100-python3-bleach_6.1.0-2_all.deb ... 599s Unpacking python3-bleach (6.1.0-2) ... 599s Selecting previously unselected package python3-soupsieve. 599s Preparing to unpack .../101-python3-soupsieve_2.5-1_all.deb ... 599s Unpacking python3-soupsieve (2.5-1) ... 599s Selecting previously unselected package python3-bs4. 599s Preparing to unpack .../102-python3-bs4_4.12.3-1_all.deb ... 599s Unpacking python3-bs4 (4.12.3-1) ... 599s Selecting previously unselected package python3-defusedxml. 599s Preparing to unpack .../103-python3-defusedxml_0.7.1-2_all.deb ... 599s Unpacking python3-defusedxml (0.7.1-2) ... 599s Selecting previously unselected package python3-jupyterlab-pygments. 599s Preparing to unpack .../104-python3-jupyterlab-pygments_0.2.2-3_all.deb ... 599s Unpacking python3-jupyterlab-pygments (0.2.2-3) ... 599s Selecting previously unselected package libxslt1.1:amd64. 599s Preparing to unpack .../105-libxslt1.1_1.1.39-0exp1build1_amd64.deb ... 599s Unpacking libxslt1.1:amd64 (1.1.39-0exp1build1) ... 599s Selecting previously unselected package python3-lxml:amd64. 599s Preparing to unpack .../106-python3-lxml_5.2.1-1_amd64.deb ... 599s Unpacking python3-lxml:amd64 (5.2.1-1) ... 599s Selecting previously unselected package python3-fastjsonschema. 
599s Preparing to unpack .../107-python3-fastjsonschema_2.19.0-1_all.deb ... 599s Unpacking python3-fastjsonschema (2.19.0-1) ... 599s Selecting previously unselected package python3-nbformat. 599s Preparing to unpack .../108-python3-nbformat_5.9.1-1_all.deb ... 599s Unpacking python3-nbformat (5.9.1-1) ... 599s Selecting previously unselected package python3-nbclient. 599s Preparing to unpack .../109-python3-nbclient_0.8.0-1_all.deb ... 599s Unpacking python3-nbclient (0.8.0-1) ... 599s Selecting previously unselected package python3-pandocfilters. 599s Preparing to unpack .../110-python3-pandocfilters_1.5.1-1_all.deb ... 599s Unpacking python3-pandocfilters (1.5.1-1) ... 599s Selecting previously unselected package python-tinycss2-common. 599s Preparing to unpack .../111-python-tinycss2-common_1.2.1-2_all.deb ... 599s Unpacking python-tinycss2-common (1.2.1-2) ... 599s Selecting previously unselected package python3-tinycss2. 599s Preparing to unpack .../112-python3-tinycss2_1.2.1-2_all.deb ... 599s Unpacking python3-tinycss2 (1.2.1-2) ... 599s Selecting previously unselected package python3-lxml-html-clean. 599s Preparing to unpack .../113-python3-lxml-html-clean_0.1.1-1_all.deb ... 599s Unpacking python3-lxml-html-clean (0.1.1-1) ... 599s Selecting previously unselected package python3-nbconvert. 599s Preparing to unpack .../114-python3-nbconvert_6.5.3-5_all.deb ... 599s Unpacking python3-nbconvert (6.5.3-5) ... 600s Selecting previously unselected package python3-prometheus-client. 600s Preparing to unpack .../115-python3-prometheus-client_0.19.0+ds1-1_all.deb ... 600s Unpacking python3-prometheus-client (0.19.0+ds1-1) ... 600s Selecting previously unselected package python3-send2trash. 600s Preparing to unpack .../116-python3-send2trash_1.8.2-1_all.deb ... 600s Unpacking python3-send2trash (1.8.2-1) ... 600s Selecting previously unselected package python3-notebook. 600s Preparing to unpack .../117-python3-notebook_6.4.12-2.2ubuntu1_all.deb ... 600s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ... 600s Selecting previously unselected package jupyter-notebook. 600s Preparing to unpack .../118-jupyter-notebook_6.4.12-2.2ubuntu1_all.deb ... 600s Unpacking jupyter-notebook (6.4.12-2.2ubuntu1) ... 600s Selecting previously unselected package libjs-sphinxdoc. 600s Preparing to unpack .../119-libjs-sphinxdoc_7.2.6-6_all.deb ... 600s Unpacking libjs-sphinxdoc (7.2.6-6) ... 600s Selecting previously unselected package sphinx-rtd-theme-common. 600s Preparing to unpack .../120-sphinx-rtd-theme-common_2.0.0+dfsg-1_all.deb ... 600s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 600s Selecting previously unselected package python-notebook-doc. 600s Preparing to unpack .../121-python-notebook-doc_6.4.12-2.2ubuntu1_all.deb ... 600s Unpacking python-notebook-doc (6.4.12-2.2ubuntu1) ... 600s Selecting previously unselected package python3-iniconfig. 600s Preparing to unpack .../122-python3-iniconfig_1.1.1-2_all.deb ... 600s Unpacking python3-iniconfig (1.1.1-2) ... 600s Selecting previously unselected package python3-pluggy. 600s Preparing to unpack .../123-python3-pluggy_1.4.0-1_all.deb ... 600s Unpacking python3-pluggy (1.4.0-1) ... 600s Selecting previously unselected package python3-pytest. 600s Preparing to unpack .../124-python3-pytest_7.4.4-1_all.deb ... 600s Unpacking python3-pytest (7.4.4-1) ... 600s Selecting previously unselected package python3-requests-unixsocket. 600s Preparing to unpack .../125-python3-requests-unixsocket_0.3.0-3ubuntu3_all.deb ... 
600s Unpacking python3-requests-unixsocket (0.3.0-3ubuntu3) ... 600s Setting up python3-entrypoints (0.4-2) ... 600s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 600s Setting up python3-iniconfig (1.1.1-2) ... 600s Setting up python3-tornado (6.4.0-1build1) ... 601s Setting up libnorm1t64:amd64 (1.5.9+dfsg-3.1build1) ... 601s Setting up python3-pure-eval (0.2.2-2) ... 601s Setting up python3-send2trash (1.8.2-1) ... 601s Setting up fonts-lato (2.015-1) ... 601s Setting up fonts-mathjax (2.7.9+dfsg-1) ... 601s Setting up libsodium23:amd64 (1.0.18-1build3) ... 601s Setting up libjs-mathjax (2.7.9+dfsg-1) ... 601s Setting up python3-py (1.11.0-2) ... 601s Setting up libdebuginfod-common (0.190-1.1build4) ... 601s Setting up libjs-requirejs-text (2.0.12-1.1) ... 601s Setting up python3-parso (0.8.3-1) ... 601s Setting up python3-defusedxml (0.7.1-2) ... 601s Setting up python3-ipython-genutils (0.2.0-6) ... 602s Setting up python3-asttokens (2.4.1-1) ... 602s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 602s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 602s Setting up libjs-moment (2.29.4+ds-1) ... 602s Setting up python3-pandocfilters (1.5.1-1) ... 602s Setting up libgomp1:amd64 (14-20240412-0ubuntu1) ... 602s Setting up libjs-requirejs (2.3.6+ds+~2.1.34-2) ... 602s Setting up libjs-es6-promise (4.2.8-12) ... 602s Setting up libjs-text-encoding (0.7.0-5) ... 602s Setting up python3-webencodings (0.5.1-5) ... 602s Setting up python3-platformdirs (4.2.0-1) ... 602s Setting up python3-psutil (5.9.8-2build2) ... 602s Setting up libsource-highlight-common (3.1.9-4.3build1) ... 602s Setting up python3-requests-unixsocket (0.3.0-3ubuntu3) ... 603s Setting up python3-jupyterlab-pygments (0.2.2-3) ... 603s Setting up libpython3.12t64:amd64 (3.12.3-1) ... 603s Setting up libpgm-5.3-0t64:amd64 (5.3.128~dfsg-2.1build1) ... 603s Setting up python3-decorator (5.1.1-5) ... 603s Setting up python3-packaging (24.0-1) ... 603s Setting up gcc-13-base:amd64 (13.2.0-23ubuntu4) ... 603s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 603s Setting up node-jed (1.1.1-4) ... 603s Setting up python3-typeshed (0.0~git20231111.6764465-3) ... 603s Setting up python3-executing (2.0.1-0.1) ... 603s Setting up libjs-xterm (5.3.0-2) ... 603s Setting up python3-nest-asyncio (1.5.4-1) ... 603s Setting up libquadmath0:amd64 (14-20240412-0ubuntu1) ... 603s Setting up python3-bytecode (0.15.1-3) ... 603s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ... 603s Setting up libmpc3:amd64 (1.3.1-1build1) ... 603s Setting up libatomic1:amd64 (14-20240412-0ubuntu1) ... 603s Setting up libjs-jed (1.1.1-4) ... 603s Setting up libipt2 (2.0.6-1build1) ... 603s Setting up python3-html5lib (1.1-6) ... 604s Setting up libbabeltrace1:amd64 (1.5.11-3build3) ... 604s Setting up python3-pluggy (1.4.0-1) ... 604s Setting up libubsan1:amd64 (14-20240412-0ubuntu1) ... 604s Setting up python3-fastjsonschema (2.19.0-1) ... 604s Setting up libhwasan0:amd64 (14-20240412-0ubuntu1) ... 604s Setting up python3-traitlets (5.14.3-1) ... 604s Setting up libasan8:amd64 (14-20240412-0ubuntu1) ... 604s Setting up python-tinycss2-common (1.2.1-2) ... 604s Setting up libxslt1.1:amd64 (1.1.39-0exp1build1) ... 604s Setting up python3-argon2 (21.1.0-2build1) ... 604s Setting up python3-dateutil (2.8.2-3ubuntu1) ... 604s Setting up libtsan2:amd64 (14-20240412-0ubuntu1) ... 604s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 604s Setting up libisl23:amd64 (0.26-3build1) ... 
604s Setting up python3-stack-data (0.6.3-1) ... 604s Setting up python3-soupsieve (2.5-1) ... 605s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 605s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 605s Setting up libcc1-0:amd64 (14-20240412-0ubuntu1) ... 605s Setting up python3-jupyter-core (5.3.2-1ubuntu1) ... 605s Setting up liblsan0:amd64 (14-20240412-0ubuntu1) ... 605s Setting up libjs-bootstrap (3.4.1+dfsg-3) ... 605s Setting up libitm1:amd64 (14-20240412-0ubuntu1) ... 605s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 605s Setting up python3-ptyprocess (0.7.0-5) ... 605s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ... 605s Setting up python3-prompt-toolkit (3.0.43-1) ... 605s Setting up libdebuginfod1t64:amd64 (0.190-1.1build4) ... 605s Setting up python3-tinycss2 (1.2.1-2) ... 605s Setting up libzmq5:amd64 (4.3.5-1build2) ... 605s Setting up python3-jedi (0.19.1+ds1-1) ... 606s Setting up cpp-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 606s Setting up python3-pytest (7.4.4-1) ... 606s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ... 606s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 606s Setting up libsource-highlight4t64:amd64 (3.1.9-4.3build1) ... 606s Setting up python3-nbformat (5.9.1-1) ... 606s Setting up python3-bs4 (4.12.3-1) ... 606s Setting up python3-bleach (6.1.0-2) ... 606s Setting up python3-matplotlib-inline (0.1.6-2) ... 607s Setting up python3-comm (0.2.1-1) ... 607s Setting up python3-prometheus-client (0.19.0+ds1-1) ... 607s Setting up gdb (15.0.50.20240403-0ubuntu1) ... 607s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ... 607s Setting up python3-pexpect (4.9-2) ... 607s Setting up python3-zmq (24.0.1-5build1) ... 607s Setting up libjs-sphinxdoc (7.2.6-6) ... 607s Setting up python3-terminado (0.17.1-1) ... 607s Setting up libgcc-13-dev:amd64 (13.2.0-23ubuntu4) ... 607s Setting up python3-lxml:amd64 (5.2.1-1) ... 607s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ... 608s Setting up jupyter-core (5.3.2-1ubuntu1) ... 608s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ... 608s Setting up libstdc++-13-dev:amd64 (13.2.0-23ubuntu4) ... 608s Setting up cpp-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 608s Setting up cpp-13 (13.2.0-23ubuntu4) ... 608s Setting up gcc-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 608s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ... 608s Setting up python-notebook-doc (6.4.12-2.2ubuntu1) ... 608s Setting up python3-nbclient (0.8.0-1) ... 609s Setting up python3-ipython (8.20.0-1) ... 609s Setting up python3-ipykernel (6.29.3-1) ... 609s Setting up gcc-13 (13.2.0-23ubuntu4) ... 609s Setting up python3-lxml-html-clean (0.1.1-1) ... 609s Setting up python3-nbconvert (6.5.3-5) ... 609s Setting up cpp (4:13.2.0-7ubuntu1) ... 609s Setting up g++-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 609s Setting up gcc-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 609s Setting up python3-notebook (6.4.12-2.2ubuntu1) ... 610s Setting up gcc (4:13.2.0-7ubuntu1) ... 610s Setting up g++-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 610s Setting up g++-13 (13.2.0-23ubuntu4) ... 610s Setting up jupyter-notebook (6.4.12-2.2ubuntu1) ... 610s Setting up g++ (4:13.2.0-7ubuntu1) ... 610s update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode 610s Setting up build-essential (12.10ubuntu1) ... 610s Processing triggers for man-db (2.12.0-4build2) ... 611s Processing triggers for libc-bin (2.39-0ubuntu8) ... 611s Reading package lists... 612s Building dependency tree... 612s Reading state information... 
612s Starting pkgProblemResolver with broken count: 0
612s Starting 2 pkgProblemResolver with broken count: 0
612s Done
612s The following NEW packages will be installed:
612s autopkgtest-satdep
612s 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
612s Need to get 0 B/696 B of archives.
612s After this operation, 0 B of additional disk space will be used.
612s Get:1 /tmp/autopkgtest.FMSSaJ/2-autopkgtest-satdep.deb autopkgtest-satdep amd64 0 [696 B]
613s Selecting previously unselected package autopkgtest-satdep.
613s (Reading database ... 91851 files and directories currently installed.)
613s Preparing to unpack .../2-autopkgtest-satdep.deb ...
613s Unpacking autopkgtest-satdep (0) ...
613s Setting up autopkgtest-satdep (0) ...
615s (Reading database ... 91851 files and directories currently installed.)
615s Removing autopkgtest-satdep (0) ...
615s autopkgtest [23:19:43]: test pytest: [-----------------------
616s ============================= test session starts ==============================
616s platform linux -- Python 3.12.3, pytest-7.4.4, pluggy-1.4.0
616s rootdir: /tmp/autopkgtest.FMSSaJ/build.uPX/src
616s collected 330 items / 5 deselected / 325 selected
616s
617s notebook/auth/tests/test_login.py EE [ 0%]
617s notebook/auth/tests/test_security.py .... [ 1%]
618s notebook/bundler/tests/test_bundler_api.py EEEEE [ 3%]
618s notebook/bundler/tests/test_bundler_tools.py ............. [ 7%]
618s notebook/bundler/tests/test_bundlerextension.py ... [ 8%]
618s notebook/nbconvert/tests/test_nbconvert_handlers.py ssssss [ 10%]
619s notebook/services/api/tests/test_api.py EEE [ 11%]
619s notebook/services/config/tests/test_config_api.py EEE [ 12%]
621s notebook/services/contents/tests/test_contents_api.py EsEEEEEEEEEEssEEsE [ 17%]
629s EEEEEEEEEEEEEEEEEEEEEEEEEsEEEEEEEEEEEssEEsEEEEEEEEEEEEEEEEEEEEEEEEE [ 38%]
629s notebook/services/contents/tests/test_fileio.py ... [ 39%]
629s notebook/services/contents/tests/test_largefilemanager.py . [ 39%]
629s notebook/services/contents/tests/test_manager.py .....s........ss....... [ 46%]
630s ...ss........ [ 50%]
631s notebook/services/kernels/tests/test_kernels_api.py EEEEEEEEEEEE [ 54%]
632s notebook/services/kernelspecs/tests/test_kernelspecs_api.py EEEEEEE [ 56%]
632s notebook/services/nbconvert/tests/test_nbconvert_api.py E [ 56%]
634s notebook/services/sessions/tests/test_sessionmanager.py FFFFFFFFF [ 59%]
636s notebook/services/sessions/tests/test_sessions_api.py EEEEEEEEEEEEEEEEEE [ 64%]
636s EEEE [ 66%]
637s notebook/terminal/tests/test_terminals_api.py EEEEEEEE [ 68%]
637s notebook/tests/test_config_manager.py . [ 68%]
638s notebook/tests/test_files.py EEEEE [ 70%]
639s notebook/tests/test_gateway.py EEEEEE [ 72%]
639s notebook/tests/test_i18n.py . [ 72%]
639s notebook/tests/test_log.py . [ 72%]
640s notebook/tests/test_nbextensions.py ................................... [ 83%]
643s notebook/tests/test_notebookapp.py FFFFFFFFF........F.EEEEEEE [ 91%]
643s notebook/tests/test_paths.py ..E [ 92%]
643s notebook/tests/test_serialize.py .. [ 93%]
644s notebook/tests/test_serverextensions.py ...FF [ 94%]
644s notebook/tests/test_traittypes.py ........... [ 98%]
644s notebook/tests/test_utils.py F...s [ 99%]
644s notebook/tree/tests/test_tree_handler.py E [100%]
644s
644s ==================================== ERRORS ====================================
644s __________________ ERROR at setup of LoginTest.test_next_bad ___________________
644s
644s self =
644s
644s def _new_conn(self) -> socket.socket:
644s """Establish a socket connection and set nodelay settings on it.
644s
644s :return: New socket connection.
644s """
644s try:
644s > sock = connection.create_connection(
644s (self._dns_host, self.port),
644s self.timeout,
644s source_address=self.source_address,
644s socket_options=self.socket_options,
644s )
644s
644s /usr/lib/python3/dist-packages/urllib3/connection.py:203:
644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
644s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection
644s raise err
644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
644s
644s address = ('localhost', 12341), timeout = None, source_address = None
644s socket_options = [(6, 1, 1)]
644s
644s def create_connection(
644s address: tuple[str, int],
644s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT,
644s source_address: tuple[str, int] | None = None,
644s socket_options: _TYPE_SOCKET_OPTIONS | None = None,
644s ) -> socket.socket:
644s """Connect to *address* and return the socket object.
644s
644s Convenience function. Connect to *address* (a 2-tuple ``(host,
644s port)``) and return the socket object. Passing the optional
644s *timeout* parameter will set the timeout on the socket instance
644s before attempting to connect. If no *timeout* is supplied, the
644s global default timeout setting returned by :func:`socket.getdefaulttimeout`
644s is used. If *source_address* is set it must be a tuple of (host, port)
644s for the socket to bind as a source address before making the connection.
644s An host of '' or port 0 tells the OS to use the default.
644s """
644s
644s host, port = address
644s if host.startswith("["):
644s host = host.strip("[]")
644s err = None
644s
644s # Using the value from allowed_gai_family() in the context of getaddrinfo lets
644s # us select whether to work with IPv4 DNS records, IPv6 records, or both.
644s # The original create_connection function always returns all records.
644s family = allowed_gai_family()
644s
644s try:
644s host.encode("idna")
644s except UnicodeError:
644s raise LocationParseError(f"'{host}', label empty or too long") from None
644s
644s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
644s af, socktype, proto, canonname, sa = res
644s sock = None
644s try:
644s sock = socket.socket(af, socktype, proto)
644s
644s # If provided, set socket level options before connecting.
644s _set_socket_options(sock, socket_options) 644s 644s if timeout is not _DEFAULT_TIMEOUT: 644s sock.settimeout(timeout) 644s if source_address: 644s sock.bind(source_address) 644s > sock.connect(sa) 644s E ConnectionRefusedError: [Errno 111] Connection refused 644s 644s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 644s 644s The above exception was the direct cause of the following exception: 644s 644s self = 644s method = 'GET', url = '/a%40b/api/contents', body = None 644s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 644s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 644s redirect = False, assert_same_host = False 644s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 644s release_conn = False, chunked = False, body_pos = None, preload_content = False 644s decode_content = False, response_kw = {} 644s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 644s destination_scheme = None, conn = None, release_this_conn = True 644s http_tunnel_required = False, err = None, clean_exit = False 644s 644s def urlopen( # type: ignore[override] 644s self, 644s method: str, 644s url: str, 644s body: _TYPE_BODY | None = None, 644s headers: typing.Mapping[str, str] | None = None, 644s retries: Retry | bool | int | None = None, 644s redirect: bool = True, 644s assert_same_host: bool = True, 644s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 644s pool_timeout: int | None = None, 644s release_conn: bool | None = None, 644s chunked: bool = False, 644s body_pos: _TYPE_BODY_POSITION | None = None, 644s preload_content: bool = True, 644s decode_content: bool = True, 644s **response_kw: typing.Any, 644s ) -> BaseHTTPResponse: 644s """ 644s Get a connection from the pool and perform an HTTP request. This is the 644s lowest level call for making a request, so you'll need to specify all 644s the raw details. 644s 644s .. note:: 644s 644s More commonly, it's appropriate to use a convenience method 644s such as :meth:`request`. 644s 644s .. note:: 644s 644s `release_conn` will only behave as expected if 644s `preload_content=False` because we want to make 644s `preload_content=False` the default behaviour someday soon without 644s breaking backwards compatibility. 644s 644s :param method: 644s HTTP request method (such as GET, POST, PUT, etc.) 644s 644s :param url: 644s The URL to perform the request on. 644s 644s :param body: 644s Data to send in the request body, either :class:`str`, :class:`bytes`, 644s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 644s 644s :param headers: 644s Dictionary of custom headers to send, such as User-Agent, 644s If-None-Match, etc. If None, pool headers are used. If provided, 644s these headers completely replace any pool-specific headers. 644s 644s :param retries: 644s Configure the number of retries to allow before raising a 644s :class:`~urllib3.exceptions.MaxRetryError` exception. 644s 644s Pass ``None`` to retry until you receive a response. Pass a 644s :class:`~urllib3.util.retry.Retry` object for fine-grained control 644s over different types of retries. 644s Pass an integer number to retry connection errors that many times, 644s but no other types of errors. Pass zero to never retry. 644s 644s If ``False``, then retries are disabled and any exception is raised 644s immediately. 
Also, instead of raising a MaxRetryError on redirects, 644s the redirect response will be returned. 644s 644s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 644s 644s :param redirect: 644s If True, automatically handle redirects (status codes 301, 302, 644s 303, 307, 308). Each redirect counts as a retry. Disabling retries 644s will disable redirect, too. 644s 644s :param assert_same_host: 644s If ``True``, will make sure that the host of the pool requests is 644s consistent else will raise HostChangedError. When ``False``, you can 644s use the pool on an HTTP proxy and request foreign hosts. 644s 644s :param timeout: 644s If specified, overrides the default timeout for this one 644s request. It may be a float (in seconds) or an instance of 644s :class:`urllib3.util.Timeout`. 644s 644s :param pool_timeout: 644s If set and the pool is set to block=True, then this method will 644s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 644s connection is available within the time period. 644s 644s :param bool preload_content: 644s If True, the response's body will be preloaded into memory. 644s 644s :param bool decode_content: 644s If True, will attempt to decode the body based on the 644s 'content-encoding' header. 644s 644s :param release_conn: 644s If False, then the urlopen call will not release the connection 644s back into the pool once a response is received (but will release if 644s you read the entire contents of the response such as when 644s `preload_content=True`). This is useful if you're not preloading 644s the response's content immediately. You will need to call 644s ``r.release_conn()`` on the response ``r`` to return the connection 644s back into the pool. If None, it takes the value of ``preload_content`` 644s which defaults to ``True``. 644s 644s :param bool chunked: 644s If True, urllib3 will send the body using chunked transfer 644s encoding. Otherwise, urllib3 will send the body using the standard 644s content-length form. Defaults to False. 644s 644s :param int body_pos: 644s Position to seek to in file-like body in the event of a retry or 644s redirect. Typically this won't need to be set because urllib3 will 644s auto-populate the value when needed. 644s """ 644s parsed_url = parse_url(url) 644s destination_scheme = parsed_url.scheme 644s 644s if headers is None: 644s headers = self.headers 644s 644s if not isinstance(retries, Retry): 644s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 644s 644s if release_conn is None: 644s release_conn = preload_content 644s 644s # Check host 644s if assert_same_host and not self.is_same_host(url): 644s raise HostChangedError(self, url, retries) 644s 644s # Ensure that the URL we're connecting to is properly encoded 644s if url.startswith("/"): 644s url = to_str(_encode_target(url)) 644s else: 644s url = to_str(parsed_url.url) 644s 644s conn = None 644s 644s # Track whether `conn` needs to be released before 644s # returning/raising/recursing. Update this variable if necessary, and 644s # leave `release_conn` constant throughout the function. That way, if 644s # the function recurses, the original value of `release_conn` will be 644s # passed down into the recursive call, and its value will be respected. 644s # 644s # See issue #651 [1] for details. 644s # 644s # [1] 644s release_this_conn = release_conn 644s 644s http_tunnel_required = connection_requires_http_tunnel( 644s self.proxy, self.proxy_config, destination_scheme 644s ) 644s 644s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 644s # have to copy the headers dict so we can safely change it without those 644s # changes being reflected in anyone else's copy. 644s if not http_tunnel_required: 644s headers = headers.copy() # type: ignore[attr-defined] 644s headers.update(self.proxy_headers) # type: ignore[union-attr] 644s 644s # Must keep the exception bound to a separate variable or else Python 3 644s # complains about UnboundLocalError. 644s err = None 644s 644s # Keep track of whether we cleanly exited the except block. This 644s # ensures we do proper cleanup in finally. 644s clean_exit = False 644s 644s # Rewind body position, if needed. Record current position 644s # for future rewinds in the event of a redirect/retry. 644s body_pos = set_file_position(body, body_pos) 644s 644s try: 644s # Request a connection from the queue. 644s timeout_obj = self._get_timeout(timeout) 644s conn = self._get_conn(timeout=pool_timeout) 644s 644s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 644s 644s # Is this a closed/new connection that requires CONNECT tunnelling? 644s if self.proxy is not None and http_tunnel_required and conn.is_closed: 644s try: 644s self._prepare_proxy(conn) 644s except (BaseSSLError, OSError, SocketTimeout) as e: 644s self._raise_timeout( 644s err=e, url=self.proxy.url, timeout_value=conn.timeout 644s ) 644s raise 644s 644s # If we're going to release the connection in ``finally:``, then 644s # the response doesn't need to know about the connection. Otherwise 644s # it will also try to release it and we'll have a double-release 644s # mess. 644s response_conn = conn if not release_conn else None 644s 644s # Make the request on the HTTPConnection object 644s > response = self._make_request( 644s conn, 644s method, 644s url, 644s timeout=timeout_obj, 644s body=body, 644s headers=headers, 644s chunked=chunked, 644s retries=retries, 644s response_conn=response_conn, 644s preload_content=preload_content, 644s decode_content=decode_content, 644s **response_kw, 644s ) 644s 644s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 644s conn.request( 644s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 644s self.endheaders() 644s /usr/lib/python3.12/http/client.py:1331: in endheaders 644s self._send_output(message_body, encode_chunked=encode_chunked) 644s /usr/lib/python3.12/http/client.py:1091: in _send_output 644s self.send(msg) 644s /usr/lib/python3.12/http/client.py:1035: in send 644s self.connect() 644s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 644s self.sock = self._new_conn() 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s 644s self = 644s 644s def _new_conn(self) -> socket.socket: 644s """Establish a socket connection and set nodelay settings on it. 644s 644s :return: New socket connection. 644s """ 644s try: 644s sock = connection.create_connection( 644s (self._dns_host, self.port), 644s self.timeout, 644s source_address=self.source_address, 644s socket_options=self.socket_options, 644s ) 644s except socket.gaierror as e: 644s raise NameResolutionError(self.host, self, e) from e 644s except SocketTimeout as e: 644s raise ConnectTimeoutError( 644s self, 644s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 644s ) from e 644s 644s except OSError as e: 644s > raise NewConnectionError( 644s self, f"Failed to establish a new connection: {e}" 644s ) from e 644s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 644s 644s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 644s 644s The above exception was the direct cause of the following exception: 644s 644s self = 644s request = , stream = False 644s timeout = Timeout(connect=None, read=None, total=None), verify = True 644s cert = None, proxies = OrderedDict() 644s 644s def send( 644s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 644s ): 644s """Sends PreparedRequest object. Returns Response object. 644s 644s :param request: The :class:`PreparedRequest ` being sent. 644s :param stream: (optional) Whether to stream the request content. 644s :param timeout: (optional) How long to wait for the server to send 644s data before giving up, as a float, or a :ref:`(connect timeout, 644s read timeout) ` tuple. 644s :type timeout: float or tuple or urllib3 Timeout object 644s :param verify: (optional) Either a boolean, in which case it controls whether 644s we verify the server's TLS certificate, or a string, in which case it 644s must be a path to a CA bundle to use 644s :param cert: (optional) Any user-provided SSL certificate to be trusted. 644s :param proxies: (optional) The proxies dictionary to apply to the request. 644s :rtype: requests.Response 644s """ 644s 644s try: 644s conn = self.get_connection(request.url, proxies) 644s except LocationValueError as e: 644s raise InvalidURL(e, request=request) 644s 644s self.cert_verify(conn, request.url, verify, cert) 644s url = self.request_url(request, proxies) 644s self.add_headers( 644s request, 644s stream=stream, 644s timeout=timeout, 644s verify=verify, 644s cert=cert, 644s proxies=proxies, 644s ) 644s 644s chunked = not (request.body is None or "Content-Length" in request.headers) 644s 644s if isinstance(timeout, tuple): 644s try: 644s connect, read = timeout 644s timeout = TimeoutSauce(connect=connect, read=read) 644s except ValueError: 644s raise ValueError( 644s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 644s f"or a single float to set both timeouts to the same value." 
644s ) 644s elif isinstance(timeout, TimeoutSauce): 644s pass 644s else: 644s timeout = TimeoutSauce(connect=timeout, read=timeout) 644s 644s try: 644s > resp = conn.urlopen( 644s method=request.method, 644s url=url, 644s body=request.body, 644s headers=request.headers, 644s redirect=False, 644s assert_same_host=False, 644s preload_content=False, 644s decode_content=False, 644s retries=self.max_retries, 644s timeout=timeout, 644s chunked=chunked, 644s ) 644s 644s /usr/lib/python3/dist-packages/requests/adapters.py:486: 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 644s retries = retries.increment( 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s 644s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 644s method = 'GET', url = '/a%40b/api/contents', response = None 644s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 644s _pool = 644s _stacktrace = 644s 644s def increment( 644s self, 644s method: str | None = None, 644s url: str | None = None, 644s response: BaseHTTPResponse | None = None, 644s error: Exception | None = None, 644s _pool: ConnectionPool | None = None, 644s _stacktrace: TracebackType | None = None, 644s ) -> Retry: 644s """Return a new Retry object with incremented retry counters. 644s 644s :param response: A response object, or None, if the server did not 644s return a response. 644s :type response: :class:`~urllib3.response.BaseHTTPResponse` 644s :param Exception error: An error encountered during the request, or 644s None if the response was received successfully. 644s 644s :return: A new ``Retry`` object. 644s """ 644s if self.total is False and error: 644s # Disabled, indicate to re-raise the error. 644s raise reraise(type(error), error, _stacktrace) 644s 644s total = self.total 644s if total is not None: 644s total -= 1 644s 644s connect = self.connect 644s read = self.read 644s redirect = self.redirect 644s status_count = self.status 644s other = self.other 644s cause = "unknown" 644s status = None 644s redirect_location = None 644s 644s if error and self._is_connection_error(error): 644s # Connect retry? 644s if connect is False: 644s raise reraise(type(error), error, _stacktrace) 644s elif connect is not None: 644s connect -= 1 644s 644s elif error and self._is_read_error(error): 644s # Read retry? 644s if read is False or method is None or not self._is_method_retryable(method): 644s raise reraise(type(error), error, _stacktrace) 644s elif read is not None: 644s read -= 1 644s 644s elif error: 644s # Other retry? 644s if other is not None: 644s other -= 1 644s 644s elif response and response.get_redirect_location(): 644s # Redirect retry? 
644s if redirect is not None: 644s redirect -= 1 644s cause = "too many redirects" 644s response_redirect_location = response.get_redirect_location() 644s if response_redirect_location: 644s redirect_location = response_redirect_location 644s status = response.status 644s 644s else: 644s # Incrementing because of a server error like a 500 in 644s # status_forcelist and the given method is in the allowed_methods 644s cause = ResponseError.GENERIC_ERROR 644s if response and response.status: 644s if status_count is not None: 644s status_count -= 1 644s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 644s status = response.status 644s 644s history = self.history + ( 644s RequestHistory(method, url, error, status, redirect_location), 644s ) 644s 644s new_retry = self.new( 644s total=total, 644s connect=connect, 644s read=read, 644s redirect=redirect, 644s status=status_count, 644s other=other, 644s history=history, 644s ) 644s 644s if new_retry.is_exhausted(): 644s reason = error or ResponseError(cause) 644s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 644s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 644s 644s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 644s 644s During handling of the above exception, another exception occurred: 644s 644s cls = 644s 644s @classmethod 644s def wait_until_alive(cls): 644s """Wait for the server to be alive""" 644s url = cls.base_url() + 'api/contents' 644s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 644s try: 644s > cls.fetch_url(url) 644s 644s notebook/tests/launchnotebook.py:53: 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s notebook/tests/launchnotebook.py:82: in fetch_url 644s return requests.get(url) 644s /usr/lib/python3/dist-packages/requests/api.py:73: in get 644s return request("get", url, params=params, **kwargs) 644s /usr/lib/python3/dist-packages/requests/api.py:59: in request 644s return session.request(method=method, url=url, **kwargs) 644s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 644s resp = self.send(prep, **send_kwargs) 644s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 644s r = adapter.send(request, **kwargs) 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s 644s self = 644s request = , stream = False 644s timeout = Timeout(connect=None, read=None, total=None), verify = True 644s cert = None, proxies = OrderedDict() 644s 644s def send( 644s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 644s ): 644s """Sends PreparedRequest object. Returns Response object. 644s 644s :param request: The :class:`PreparedRequest ` being sent. 644s :param stream: (optional) Whether to stream the request content. 644s :param timeout: (optional) How long to wait for the server to send 644s data before giving up, as a float, or a :ref:`(connect timeout, 644s read timeout) ` tuple. 644s :type timeout: float or tuple or urllib3 Timeout object 644s :param verify: (optional) Either a boolean, in which case it controls whether 644s we verify the server's TLS certificate, or a string, in which case it 644s must be a path to a CA bundle to use 644s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
644s :param proxies: (optional) The proxies dictionary to apply to the request. 644s :rtype: requests.Response 644s """ 644s 644s try: 644s conn = self.get_connection(request.url, proxies) 644s except LocationValueError as e: 644s raise InvalidURL(e, request=request) 644s 644s self.cert_verify(conn, request.url, verify, cert) 644s url = self.request_url(request, proxies) 644s self.add_headers( 644s request, 644s stream=stream, 644s timeout=timeout, 644s verify=verify, 644s cert=cert, 644s proxies=proxies, 644s ) 644s 644s chunked = not (request.body is None or "Content-Length" in request.headers) 644s 644s if isinstance(timeout, tuple): 644s try: 644s connect, read = timeout 644s timeout = TimeoutSauce(connect=connect, read=read) 644s except ValueError: 644s raise ValueError( 644s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 644s f"or a single float to set both timeouts to the same value." 644s ) 644s elif isinstance(timeout, TimeoutSauce): 644s pass 644s else: 644s timeout = TimeoutSauce(connect=timeout, read=timeout) 644s 644s try: 644s resp = conn.urlopen( 644s method=request.method, 644s url=url, 644s body=request.body, 644s headers=request.headers, 644s redirect=False, 644s assert_same_host=False, 644s preload_content=False, 644s decode_content=False, 644s retries=self.max_retries, 644s timeout=timeout, 644s chunked=chunked, 644s ) 644s 644s except (ProtocolError, OSError) as err: 644s raise ConnectionError(err, request=request) 644s 644s except MaxRetryError as e: 644s if isinstance(e.reason, ConnectTimeoutError): 644s # TODO: Remove this in 3.0.0: see #2811 644s if not isinstance(e.reason, NewConnectionError): 644s raise ConnectTimeout(e, request=request) 644s 644s if isinstance(e.reason, ResponseError): 644s raise RetryError(e, request=request) 644s 644s if isinstance(e.reason, _ProxyError): 644s raise ProxyError(e, request=request) 644s 644s if isinstance(e.reason, _SSLError): 644s # This branch is for urllib3 v1.22 and later. 644s raise SSLError(e, request=request) 644s 644s > raise ConnectionError(e, request=request) 644s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 644s 644s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 644s 644s The above exception was the direct cause of the following exception: 644s 644s cls = 644s 644s @classmethod 644s def setup_class(cls): 644s cls.tmp_dir = TemporaryDirectory() 644s def tmp(*parts): 644s path = os.path.join(cls.tmp_dir.name, *parts) 644s try: 644s os.makedirs(path) 644s except OSError as e: 644s if e.errno != errno.EEXIST: 644s raise 644s return path 644s 644s cls.home_dir = tmp('home') 644s data_dir = cls.data_dir = tmp('data') 644s config_dir = cls.config_dir = tmp('config') 644s runtime_dir = cls.runtime_dir = tmp('runtime') 644s cls.notebook_dir = tmp('notebooks') 644s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 644s cls.env_patch.start() 644s # Patch systemwide & user-wide data & config directories, to isolate 644s # the tests from oddities of the local setup. But leave Python env 644s # locations alone, so data files for e.g. nbconvert are accessible. 644s # If this isolation isn't sufficient, you may need to run the tests in 644s # a virtualenv or conda env. 
644s cls.path_patch = patch.multiple( 644s jupyter_core.paths, 644s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 644s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 644s ) 644s cls.path_patch.start() 644s 644s config = cls.config or Config() 644s config.NotebookNotary.db_file = ':memory:' 644s 644s cls.token = hexlify(os.urandom(4)).decode('ascii') 644s 644s started = Event() 644s def start_thread(): 644s try: 644s bind_args = cls.get_bind_args() 644s app = cls.notebook = NotebookApp( 644s port_retries=0, 644s open_browser=False, 644s config_dir=cls.config_dir, 644s data_dir=cls.data_dir, 644s runtime_dir=cls.runtime_dir, 644s notebook_dir=cls.notebook_dir, 644s base_url=cls.url_prefix, 644s config=config, 644s allow_root=True, 644s token=cls.token, 644s **bind_args 644s ) 644s if "asyncio" in sys.modules: 644s app._init_asyncio_patch() 644s import asyncio 644s 644s asyncio.set_event_loop(asyncio.new_event_loop()) 644s # Patch the current loop in order to match production 644s # behavior 644s import nest_asyncio 644s 644s nest_asyncio.apply() 644s # don't register signal handler during tests 644s app.init_signal = lambda : None 644s # clear log handlers and propagate to root for nose to capture it 644s # needs to be redone after initialize, which reconfigures logging 644s app.log.propagate = True 644s app.log.handlers = [] 644s app.initialize(argv=cls.get_argv()) 644s app.log.propagate = True 644s app.log.handlers = [] 644s loop = IOLoop.current() 644s loop.add_callback(started.set) 644s app.start() 644s finally: 644s # set the event, so failure to start doesn't cause a hang 644s started.set() 644s app.session_manager.close() 644s cls.notebook_thread = Thread(target=start_thread) 644s cls.notebook_thread.daemon = True 644s cls.notebook_thread.start() 644s started.wait() 644s > cls.wait_until_alive() 644s 644s notebook/tests/launchnotebook.py:198: 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s 644s cls = 644s 644s @classmethod 644s def wait_until_alive(cls): 644s """Wait for the server to be alive""" 644s url = cls.base_url() + 'api/contents' 644s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 644s try: 644s cls.fetch_url(url) 644s except ModuleNotFoundError as error: 644s # Errors that should be immediately thrown back to caller 644s raise error 644s except Exception as e: 644s if not cls.notebook_thread.is_alive(): 644s > raise RuntimeError("The notebook server failed to start") from e 644s E RuntimeError: The notebook server failed to start 644s 644s notebook/tests/launchnotebook.py:59: RuntimeError 644s ___________________ ERROR at setup of LoginTest.test_next_ok ___________________ 644s 644s self = 644s 644s def _new_conn(self) -> socket.socket: 644s """Establish a socket connection and set nodelay settings on it. 644s 644s :return: New socket connection. 
644s """ 644s try: 644s > sock = connection.create_connection( 644s (self._dns_host, self.port), 644s self.timeout, 644s source_address=self.source_address, 644s socket_options=self.socket_options, 644s ) 644s 644s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 644s raise err 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s 644s address = ('localhost', 12341), timeout = None, source_address = None 644s socket_options = [(6, 1, 1)] 644s 644s def create_connection( 644s address: tuple[str, int], 644s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 644s source_address: tuple[str, int] | None = None, 644s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 644s ) -> socket.socket: 644s """Connect to *address* and return the socket object. 644s 644s Convenience function. Connect to *address* (a 2-tuple ``(host, 644s port)``) and return the socket object. Passing the optional 644s *timeout* parameter will set the timeout on the socket instance 644s before attempting to connect. If no *timeout* is supplied, the 644s global default timeout setting returned by :func:`socket.getdefaulttimeout` 644s is used. If *source_address* is set it must be a tuple of (host, port) 644s for the socket to bind as a source address before making the connection. 644s An host of '' or port 0 tells the OS to use the default. 644s """ 644s 644s host, port = address 644s if host.startswith("["): 644s host = host.strip("[]") 644s err = None 644s 644s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 644s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 644s # The original create_connection function always returns all records. 644s family = allowed_gai_family() 644s 644s try: 644s host.encode("idna") 644s except UnicodeError: 644s raise LocationParseError(f"'{host}', label empty or too long") from None 644s 644s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 644s af, socktype, proto, canonname, sa = res 644s sock = None 644s try: 644s sock = socket.socket(af, socktype, proto) 644s 644s # If provided, set socket level options before connecting. 
644s _set_socket_options(sock, socket_options) 644s 644s if timeout is not _DEFAULT_TIMEOUT: 644s sock.settimeout(timeout) 644s if source_address: 644s sock.bind(source_address) 644s > sock.connect(sa) 644s E ConnectionRefusedError: [Errno 111] Connection refused 644s 644s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 644s 644s The above exception was the direct cause of the following exception: 644s 644s self = 644s method = 'GET', url = '/a%40b/api/contents', body = None 644s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 644s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 644s redirect = False, assert_same_host = False 644s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 644s release_conn = False, chunked = False, body_pos = None, preload_content = False 644s decode_content = False, response_kw = {} 644s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 644s destination_scheme = None, conn = None, release_this_conn = True 644s http_tunnel_required = False, err = None, clean_exit = False 644s 644s def urlopen( # type: ignore[override] 644s self, 644s method: str, 644s url: str, 644s body: _TYPE_BODY | None = None, 644s headers: typing.Mapping[str, str] | None = None, 644s retries: Retry | bool | int | None = None, 644s redirect: bool = True, 644s assert_same_host: bool = True, 644s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 644s pool_timeout: int | None = None, 644s release_conn: bool | None = None, 644s chunked: bool = False, 644s body_pos: _TYPE_BODY_POSITION | None = None, 644s preload_content: bool = True, 644s decode_content: bool = True, 644s **response_kw: typing.Any, 644s ) -> BaseHTTPResponse: 644s """ 644s Get a connection from the pool and perform an HTTP request. This is the 644s lowest level call for making a request, so you'll need to specify all 644s the raw details. 644s 644s .. note:: 644s 644s More commonly, it's appropriate to use a convenience method 644s such as :meth:`request`. 644s 644s .. note:: 644s 644s `release_conn` will only behave as expected if 644s `preload_content=False` because we want to make 644s `preload_content=False` the default behaviour someday soon without 644s breaking backwards compatibility. 644s 644s :param method: 644s HTTP request method (such as GET, POST, PUT, etc.) 644s 644s :param url: 644s The URL to perform the request on. 644s 644s :param body: 644s Data to send in the request body, either :class:`str`, :class:`bytes`, 644s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 644s 644s :param headers: 644s Dictionary of custom headers to send, such as User-Agent, 644s If-None-Match, etc. If None, pool headers are used. If provided, 644s these headers completely replace any pool-specific headers. 644s 644s :param retries: 644s Configure the number of retries to allow before raising a 644s :class:`~urllib3.exceptions.MaxRetryError` exception. 644s 644s Pass ``None`` to retry until you receive a response. Pass a 644s :class:`~urllib3.util.retry.Retry` object for fine-grained control 644s over different types of retries. 644s Pass an integer number to retry connection errors that many times, 644s but no other types of errors. Pass zero to never retry. 644s 644s If ``False``, then retries are disabled and any exception is raised 644s immediately. 
Also, instead of raising a MaxRetryError on redirects, 644s the redirect response will be returned. 644s 644s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 644s 644s :param redirect: 644s If True, automatically handle redirects (status codes 301, 302, 644s 303, 307, 308). Each redirect counts as a retry. Disabling retries 644s will disable redirect, too. 644s 644s :param assert_same_host: 644s If ``True``, will make sure that the host of the pool requests is 644s consistent else will raise HostChangedError. When ``False``, you can 644s use the pool on an HTTP proxy and request foreign hosts. 644s 644s :param timeout: 644s If specified, overrides the default timeout for this one 644s request. It may be a float (in seconds) or an instance of 644s :class:`urllib3.util.Timeout`. 644s 644s :param pool_timeout: 644s If set and the pool is set to block=True, then this method will 644s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 644s connection is available within the time period. 644s 644s :param bool preload_content: 644s If True, the response's body will be preloaded into memory. 644s 644s :param bool decode_content: 644s If True, will attempt to decode the body based on the 644s 'content-encoding' header. 644s 644s :param release_conn: 644s If False, then the urlopen call will not release the connection 644s back into the pool once a response is received (but will release if 644s you read the entire contents of the response such as when 644s `preload_content=True`). This is useful if you're not preloading 644s the response's content immediately. You will need to call 644s ``r.release_conn()`` on the response ``r`` to return the connection 644s back into the pool. If None, it takes the value of ``preload_content`` 644s which defaults to ``True``. 644s 644s :param bool chunked: 644s If True, urllib3 will send the body using chunked transfer 644s encoding. Otherwise, urllib3 will send the body using the standard 644s content-length form. Defaults to False. 644s 644s :param int body_pos: 644s Position to seek to in file-like body in the event of a retry or 644s redirect. Typically this won't need to be set because urllib3 will 644s auto-populate the value when needed. 644s """ 644s parsed_url = parse_url(url) 644s destination_scheme = parsed_url.scheme 644s 644s if headers is None: 644s headers = self.headers 644s 644s if not isinstance(retries, Retry): 644s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 644s 644s if release_conn is None: 644s release_conn = preload_content 644s 644s # Check host 644s if assert_same_host and not self.is_same_host(url): 644s raise HostChangedError(self, url, retries) 644s 644s # Ensure that the URL we're connecting to is properly encoded 644s if url.startswith("/"): 644s url = to_str(_encode_target(url)) 644s else: 644s url = to_str(parsed_url.url) 644s 644s conn = None 644s 644s # Track whether `conn` needs to be released before 644s # returning/raising/recursing. Update this variable if necessary, and 644s # leave `release_conn` constant throughout the function. That way, if 644s # the function recurses, the original value of `release_conn` will be 644s # passed down into the recursive call, and its value will be respected. 644s # 644s # See issue #651 [1] for details. 644s # 644s # [1] 644s release_this_conn = release_conn 644s 644s http_tunnel_required = connection_requires_http_tunnel( 644s self.proxy, self.proxy_config, destination_scheme 644s ) 644s 644s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 644s # have to copy the headers dict so we can safely change it without those 644s # changes being reflected in anyone else's copy. 644s if not http_tunnel_required: 644s headers = headers.copy() # type: ignore[attr-defined] 644s headers.update(self.proxy_headers) # type: ignore[union-attr] 644s 644s # Must keep the exception bound to a separate variable or else Python 3 644s # complains about UnboundLocalError. 644s err = None 644s 644s # Keep track of whether we cleanly exited the except block. This 644s # ensures we do proper cleanup in finally. 644s clean_exit = False 644s 644s # Rewind body position, if needed. Record current position 644s # for future rewinds in the event of a redirect/retry. 644s body_pos = set_file_position(body, body_pos) 644s 644s try: 644s # Request a connection from the queue. 644s timeout_obj = self._get_timeout(timeout) 644s conn = self._get_conn(timeout=pool_timeout) 644s 644s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 644s 644s # Is this a closed/new connection that requires CONNECT tunnelling? 644s if self.proxy is not None and http_tunnel_required and conn.is_closed: 644s try: 644s self._prepare_proxy(conn) 644s except (BaseSSLError, OSError, SocketTimeout) as e: 644s self._raise_timeout( 644s err=e, url=self.proxy.url, timeout_value=conn.timeout 644s ) 644s raise 644s 644s # If we're going to release the connection in ``finally:``, then 644s # the response doesn't need to know about the connection. Otherwise 644s # it will also try to release it and we'll have a double-release 644s # mess. 644s response_conn = conn if not release_conn else None 644s 644s # Make the request on the HTTPConnection object 644s > response = self._make_request( 644s conn, 644s method, 644s url, 644s timeout=timeout_obj, 644s body=body, 644s headers=headers, 644s chunked=chunked, 644s retries=retries, 644s response_conn=response_conn, 644s preload_content=preload_content, 644s decode_content=decode_content, 644s **response_kw, 644s ) 644s 644s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 644s conn.request( 644s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 644s self.endheaders() 644s /usr/lib/python3.12/http/client.py:1331: in endheaders 644s self._send_output(message_body, encode_chunked=encode_chunked) 644s /usr/lib/python3.12/http/client.py:1091: in _send_output 644s self.send(msg) 644s /usr/lib/python3.12/http/client.py:1035: in send 644s self.connect() 644s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 644s self.sock = self._new_conn() 644s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 644s 644s self = 644s 644s def _new_conn(self) -> socket.socket: 644s """Establish a socket connection and set nodelay settings on it. 644s 644s :return: New socket connection. 644s """ 644s try: 644s sock = connection.create_connection( 644s (self._dns_host, self.port), 644s self.timeout, 644s source_address=self.source_address, 644s socket_options=self.socket_options, 644s ) 644s except socket.gaierror as e: 644s raise NameResolutionError(self.host, self, e) from e 644s except SocketTimeout as e: 644s raise ConnectTimeoutError( 644s self, 644s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 644s ) from e 644s 644s except OSError as e: 644s > raise NewConnectionError( 644s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of BundleAPITest.test_bundler_import_error ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
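The underlying failure in each of these setup errors is the ConnectionRefusedError above: nothing is listening on localhost:12341 because the NotebookApp thread exited before binding the port. A quick stdlib probe for that condition (host and port taken from this log; connect_ex() returns errno.ECONNREFUSED, i.e. 111 on Linux, when the connection is refused):

    import errno
    import socket

    # Probe the port the tests expect the notebook server to be listening on.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        rc = s.connect_ex(("localhost", 12341))
    print("connection refused" if rc == errno.ECONNREFUSED else f"connect_ex returned {rc}")
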
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s """Make a test notebook. Borrowed from nbconvert test. 
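As the frames above show, the test client goes through requests' default adapter, whose retry policy is Retry(total=0, read=False); the first refused connection therefore exhausts the budget, Retry.increment() raises MaxRetryError, and HTTPAdapter.send() re-raises it as requests.exceptions.ConnectionError. A hedged sketch of opting into retries through the public requests/urllib3 API, in case transient startup races rather than a dead server were the problem (the retry values are illustrative):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Retry refused connections a few times with a short backoff instead of
    # failing on the first attempt as the default Retry(total=0, ...) does.
    session.mount("http://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=0.2)))
    # e.g. session.get("http://localhost:12341/a%40b/api/contents")
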
Assumes the class 645s teardown will clean it up in the end.""" 645s > super().setup_class() 645s 645s notebook/bundler/tests/test_bundler_api.py:27: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of BundleAPITest.test_bundler_invoke ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
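create_connection(), quoted in full just above, resolves the host with socket.getaddrinfo() and tries each returned address in turn, so 'localhost' may be attempted over IPv4 and/or IPv6 before the refusal is reported. A small sketch of that resolution step for the host and port in this log:

    import socket

    # List the candidate addresses create_connection() would iterate over.
    for family, socktype, proto, _canonname, sockaddr in socket.getaddrinfo(
            "localhost", 12341, socket.AF_UNSPEC, socket.SOCK_STREAM):
        print(family, sockaddr)
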
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s """Make a test notebook. Borrowed from nbconvert test. 
Assumes the class 645s teardown will clean it up in the end.""" 645s > super().setup_class() 645s 645s notebook/bundler/tests/test_bundler_api.py:27: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of BundleAPITest.test_bundler_not_enabled ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
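# Editor's illustration, not part of the test log: at the requests layer the
# MaxRetryError above is re-raised as requests.exceptions.ConnectionError,
# which is the exception wait_until_alive() ends up swallowing while it polls.
import requests

try:
    requests.get("http://localhost:12341/a%40b/api/contents")
except requests.exceptions.ConnectionError as exc:
    print("server not reachable yet:", exc)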
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s """Make a test notebook. Borrowed from nbconvert test. 
Assumes the class 645s teardown will clean it up in the end.""" 645s > super().setup_class() 645s 645s notebook/bundler/tests/test_bundler_api.py:27: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of BundleAPITest.test_missing_bundler_arg ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
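# Editor's illustration, not part of the test log: the shape of the
# wait_until_alive() loop quoted above -- poll the contents API until the
# server answers, and give up with RuntimeError once the server thread has
# died. MAX_WAITTIME and POLL_INTERVAL mirror the harness names; the values
# used here are assumptions, not those in notebook/tests/launchnotebook.py.
import time
import requests

MAX_WAITTIME = 30   # assumed: seconds to keep polling
POLL_INTERVAL = 1   # assumed: seconds between attempts

def wait_until_alive(base_url, notebook_thread):
    url = base_url + "api/contents"
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return
        except Exception as exc:
            if not notebook_thread.is_alive():
                raise RuntimeError("The notebook server failed to start") from exc
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("The notebook server did not become reachable in time")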
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s """Make a test notebook. Borrowed from nbconvert test. 
Assumes the class 645s teardown will clean it up in the end.""" 645s > super().setup_class() 645s 645s notebook/bundler/tests/test_bundler_api.py:27: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of BundleAPITest.test_notebook_not_found ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
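# Editor's illustration, not part of the test log: a generic stand-in for what
# setup_class() expects -- a server listening on the harness port so that the
# polling above succeeds instead of being refused. This uses http.server, not
# the actual notebook server.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every path, including /a%40b/api/contents, with an empty JSON body.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b"{}")

server = HTTPServer(("localhost", 12341), OkHandler)
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()
print(requests.get("http://localhost:12341/a%40b/api/contents").status_code)  # expect 200
server.shutdown()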
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s """Make a test notebook. Borrowed from nbconvert test. 
Assumes the class 645s teardown will clean it up in the end.""" 645s > super().setup_class() 645s 645s notebook/bundler/tests/test_bundler_api.py:27: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________________ ERROR at setup of APITest.test_get_spec ____________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
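For orientation, the wait_until_alive helper quoted above reduces to a bounded polling loop that gives up early once the server thread has died. A simplified sketch of the same pattern; the MAX_WAITTIME and POLL_INTERVAL values are assumed for illustration, not taken from the harness:

    import time
    import requests

    MAX_WAITTIME = 30   # seconds, assumed
    POLL_INTERVAL = 1   # seconds, assumed

    def wait_until_alive(url, server_thread):
        """Poll `url` until it responds, failing fast if the server thread died."""
        for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
            try:
                requests.get(url)
                return
            except Exception as e:
                # No point retrying once the background server thread is gone.
                if not server_thread.is_alive():
                    raise RuntimeError("The notebook server failed to start") from e
                time.sleep(POLL_INTERVAL)
        raise RuntimeError("The notebook server did not become ready in time")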
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 
'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________________ ERROR at setup of APITest.test_get_status ___________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
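Two of the values visible in the frames above are worth decoding: socket_options=[(6, 1, 1)] is (IPPROTO_TCP, TCP_NODELAY, 1), and '[Errno 111] Connection refused' is what a plain connect to a local port with no listener yields. A small self-contained illustration; the port number is arbitrary:

    import socket

    print((socket.IPPROTO_TCP, socket.TCP_NODELAY))  # -> (6, 1) on Linux

    # With nothing listening on the port, connect() raises ConnectionRefusedError,
    # which urllib3 wraps as NewConnectionError and requests as ConnectionError.
    try:
        socket.create_connection(("localhost", 12341), timeout=1)
    except ConnectionRefusedError as exc:
        print(exc)  # e.g. [Errno 111] Connection refused (errno is platform-specific)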
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
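As the Retry(total=0, connect=None, read=False, ...) object shown earlier in this traceback indicates, requests' default adapter allows zero retries, so the first refused connection exhausts the budget and increment() raises MaxRetryError, which the adapter then re-wraps as requests.exceptions.ConnectionError. A minimal sketch of that same exhaustion, assuming (as in these failing tests) that nothing is listening on localhost:12341:

    import urllib3
    from urllib3.util.retry import Retry
    from urllib3.exceptions import MaxRetryError

    pool = urllib3.HTTPConnectionPool("localhost", 12341)
    try:
        # same shape as the failing call: zero total retries, read retries disabled
        pool.urlopen("GET", "/a%40b/api/contents", retries=Retry(total=0, read=False))
    except MaxRetryError as exc:
        print(exc.reason)   # NewConnectionError(... Connection refused), as in the log above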
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
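The isolation described in the comment above (patching environment and Jupyter path variables so the tests do not touch the local setup) can be sketched independently with unittest.mock; the directory values below are placeholders, not what get_patch_env() actually returns:

    import os
    from unittest.mock import patch

    # a minimal sketch of the same isolation idea, with illustrative paths
    env_patch = patch.dict(os.environ, {
        "JUPYTER_CONFIG_DIR": "/tmp/test-config",
        "JUPYTER_DATA_DIR": "/tmp/test-data",
    })
    env_patch.start()
    try:
        print(os.environ["JUPYTER_CONFIG_DIR"])   # patched value visible here
    finally:
        env_patch.stop()                          # original environment restored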
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of APITest.test_no_track_activity _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
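As the send() docstring above notes, the timeout argument may be a single float or a (connect, read) tuple; both forms are normalized into a urllib3 Timeout by the adapter. A small sketch of the two call shapes, again assuming the unreachable test port:

    import requests

    url = "http://localhost:12341/a%40b/api/contents"

    for timeout in (5, (3.05, 27)):          # single float, then (connect, read) tuple
        try:
            requests.get(url, timeout=timeout)
        except requests.exceptions.ConnectionError as exc:
            print(type(exc).__name__, "with timeout", timeout)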
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
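setup_class (continued below) starts the NotebookApp on a background thread, gives that thread its own event loop, and uses a threading.Event so a failed start does not hang the test run. A minimal sketch of that start-in-a-thread pattern, with illustrative names rather than the notebook test suite's:

    import asyncio
    import threading

    started = threading.Event()

    def serve():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        loop.call_soon(started.set)    # fires once the loop is actually running
        try:
            loop.run_forever()
        finally:
            loop.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    started.wait(timeout=10)           # avoid hanging if the loop never starts
    print("loop running:", started.is_set())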
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of APITest.test_create_retrieve_config _____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
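The send() frame quoted above normalizes whatever was passed as timeout into a urllib3 Timeout object before handing the request to urlopen(); fetch_url() in launchnotebook.py calls requests.get(url) with no timeout at all, which is why these frames show Timeout(connect=None, read=None, total=None). A small, self-contained sketch of that normalization step, using urllib3's public Timeout class aliased the way requests does:

from urllib3.util import Timeout as TimeoutSauce

def normalize_timeout(timeout):
    # Mirrors the branch in HTTPAdapter.send(): a (connect, read) tuple sets
    # the two timeouts separately, a bare number sets both, and an existing
    # Timeout object is passed through unchanged.
    if isinstance(timeout, tuple):
        connect, read = timeout
        return TimeoutSauce(connect=connect, read=read)
    if isinstance(timeout, TimeoutSauce):
        return timeout
    return TimeoutSauce(connect=timeout, read=timeout)

print(normalize_timeout(None))        # Timeout(connect=None, read=None, total=None)
print(normalize_timeout((3.05, 10)))  # Timeout(connect=3.05, read=10, total=None)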
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________________ ERROR at setup of APITest.test_get_unknown __________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
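The Retry(total=0, connect=None, read=False, redirect=None, status=None) object shown throughout these frames is requests' stock retry policy: the adapter is built with max_retries=0, which recent versions of requests store as Retry(0, read=False), so a refused connection is surfaced immediately instead of being retried. A short sketch that reconstructs the same object from urllib3's public API:

from urllib3.util.retry import Retry

# Equivalent to requests' default adapter policy: total=0 means the very
# first connection error exhausts the budget, increment() raises
# MaxRetryError, and requests rewraps that as ConnectionError.
retries = Retry(0, read=False)
print(retries)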
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________________ ERROR at setup of APITest.test_modify _____________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
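The comment just above summarizes how the quoted setup_class (continued below) isolates each test run: every Jupyter directory is redirected into a per-class TemporaryDirectory, both through the environment and through jupyter_core.paths. A condensed sketch of that pattern, with assumed environment-variable names since cls.get_patch_env() is not shown in this log:

import os
from tempfile import TemporaryDirectory
from unittest.mock import patch

import jupyter_core.paths

tmp_dir = TemporaryDirectory()

def tmp(*parts):
    # Create a subdirectory of the per-test temporary tree (idempotent).
    path = os.path.join(tmp_dir.name, *parts)
    os.makedirs(path, exist_ok=True)
    return path

# Assumed variable names; the real set comes from get_patch_env().
env_patch = patch.dict(os.environ, {
    "HOME": tmp("home"),
    "JUPYTER_CONFIG_DIR": tmp("config"),
    "JUPYTER_DATA_DIR": tmp("data"),
})
# Same jupyter_core.paths attributes patched by the quoted test code.
path_patch = patch.multiple(
    jupyter_core.paths,
    SYSTEM_JUPYTER_PATH=[tmp("share", "jupyter")],
    SYSTEM_CONFIG_PATH=[tmp("etc", "jupyter")],
)
env_patch.start()
path_patch.start()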
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________________ ERROR at setup of APITest.test_checkpoints __________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of APITest.test_checkpoints_separate_root ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________________ ERROR at setup of APITest.test_copy ______________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_copy_400_hidden ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________________ ERROR at setup of APITest.test_copy_copy ___________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________________ ERROR at setup of APITest.test_copy_dir_400 __________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________________ ERROR at setup of APITest.test_copy_path ___________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________________ ERROR at setup of APITest.test_copy_put_400 __________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of APITest.test_copy_put_400_hidden ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
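The requests adapter code quoted earlier in this traceback normalizes the caller's timeout argument: a (connect, read) tuple becomes separate connect/read timeouts, a bare number is applied to both phases, and anything else raises the ValueError shown. A minimal caller-side sketch of the two accepted forms follows; the URL and the durations are illustrative placeholders, not values from this test run.

# Hedged sketch of the timeout forms accepted by the adapter code quoted
# above. The URL and numbers below are placeholders, not from this log.
import requests

try:
    # (connect, read) tuple -> separate connect and read timeouts
    requests.get("http://localhost:12341/", timeout=(3.05, 27))
except requests.exceptions.RequestException:
    pass

try:
    # a single number -> the same value for both timeouts
    requests.get("http://localhost:12341/", timeout=5)
except requests.exceptions.RequestException:
    pass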
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_create_untitled ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
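The setup error above comes from the harness pattern quoted from notebook/tests/launchnotebook.py: NotebookApp is started in a daemon thread, and the test then polls an API URL until it answers; if the thread dies before the port is bound, every poll fails with a refused connection and wait_until_alive raises the RuntimeError seen here. A stripped-down sketch of that polling idea follows; the constants and the server_thread parameter are illustrative, not the harness's exact API.

# Hedged sketch of the wait-until-alive polling pattern used by the quoted
# test harness; not the harness itself. Constants are illustrative.
import time
import requests

MAX_WAITTIME = 30     # seconds to keep polling
POLL_INTERVAL = 1     # seconds between attempts

def wait_until_alive(url, server_thread):
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return
        except Exception as e:
            # If the server thread already died, polling can never succeed.
            if not server_thread.is_alive():
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise TimeoutError("server did not answer within MAX_WAITTIME")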
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
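create_connection, quoted above, ends up calling sock.connect() on each address returned by getaddrinfo; with nothing listening on the harness port, the kernel answers with ECONNREFUSED, which is the [Errno 111] propagated through this whole traceback. A bare-socket illustration, assuming localhost:12341 (the port from this log) is not in use:

# Hedged illustration of the raw failure underneath the traceback above:
# connecting to a port with no listener raises ConnectionRefusedError
# (errno 111 / ECONNREFUSED). Assumes nothing listens on localhost:12341.
import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.connect(("localhost", 12341))
except ConnectionRefusedError as exc:
    assert exc.errno == errno.ECONNREFUSED
finally:
    sock.close()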
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of APITest.test_create_untitled_txt ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
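The traceback repeated above shows the full chain: Retry(total=0, connect=None, read=False) is exhausted by the first connection error, urllib3 turns it into MaxRetryError, and the requests adapter re-raises that as ConnectionError. A small reproduction of that wrapping, assuming (as in this log) that nothing listens on localhost:12341:

# Hedged reproduction of the error chain shown in the traceback above.
# Retry(total=0) is spent by the first refused connection, urllib3 raises
# MaxRetryError, and requests wraps it in ConnectionError.
import requests
from requests.adapters import HTTPAdapter
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=Retry(total=0, read=False)))

try:
    session.get("http://localhost:12341/a%40b/api/contents")
except requests.exceptions.ConnectionError as exc:
    inner = exc.args[0]                      # the MaxRetryError from urllib3
    assert isinstance(inner, MaxRetryError)
    assert isinstance(inner.reason, NewConnectionError)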
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
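[annotation] The Retry.increment frame above is why a single refused connection is fatal here: the adapter's default retry policy is Retry(total=0, connect=None, read=False), so the first connection error exhausts the budget and is re-raised as MaxRetryError. A minimal sketch of that behaviour, assuming urllib3 v2 is importable; the connection object normally passed to NewConnectionError is stubbed with None for illustration:

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError, NewConnectionError

# Counters copied from the log: total=0 means "no retries at all".
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

err = NewConnectionError(
    None, "Failed to establish a new connection: [Errno 111] Connection refused"
)
try:
    # increment() decrements total to -1, sees the policy is exhausted,
    # and raises MaxRetryError wrapping the original connection error.
    retry.increment(method="GET", url="/a%40b/api/contents", error=err)
except MaxRetryError as exc:
    print(type(exc.reason).__name__)  # NewConnectionError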
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of APITest.test_delete_hidden_dir _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
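[annotation] The setup_class/wait_until_alive pair shown above is the harness that turns the refused connections into the final RuntimeError: the notebook server is started on a daemon thread and the test class polls the contents API until it answers or the thread dies. A rough, self-contained sketch of that polling pattern; MAX_WAITTIME and POLL_INTERVAL are stand-ins for constants defined in notebook/tests/launchnotebook.py whose real values are not shown in this excerpt:

import time
import requests

MAX_WAITTIME = 30   # assumed value, seconds to keep polling before giving up
POLL_INTERVAL = 1   # assumed value, pause between attempts

def wait_until_alive(base_url, server_is_alive):
    """Poll base_url until it responds, the server thread dies, or time runs out."""
    url = base_url + "api/contents"
    last_error = None
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return
        except Exception as e:
            last_error = e
            if not server_is_alive():
                # Mirrors the log: a dead server thread is reported immediately.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("The notebook server didn't respond in time") from last_error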
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
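[annotation] The create_connection helper quoted a little above is where ECONNREFUSED first surfaces: it walks every getaddrinfo result for ('localhost', 12341) and re-raises the last socket error once all candidates fail. A bare-bones version of that resolve-and-try loop, with the urllib3-specific socket options left out:

import socket

def create_connection(address, timeout=None):
    """Try each resolved (family, sockaddr) pair; re-raise the last failure."""
    host, port = address
    err = None
    for af, socktype, proto, _canonname, sa in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        sock = None
        try:
            sock = socket.socket(af, socktype, proto)
            if timeout is not None:
                sock.settimeout(timeout)
            sock.connect(sa)  # raises ConnectionRefusedError if nothing listens
            return sock
        except OSError as e:
            err = e
            if sock is not None:
                sock.close()
    raise err if err else OSError("getaddrinfo returned no results")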
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
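[annotation] The except MaxRetryError block in HTTPAdapter.send, visible earlier in this traceback, is what converts urllib3's error into the requests.exceptions.ConnectionError reported for each test setup: connect timeouts become ConnectTimeout, ResponseError reasons become RetryError, proxy and SSL reasons get their own types, and everything else, including NewConnectionError, falls through to ConnectionError. A small end-to-end demonstration of that mapping, assuming nothing is listening on 127.0.0.1:12341, the port the harness happens to use in this run:

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# Mirror the test client's behaviour: no retries on connection errors.
session.mount("http://", HTTPAdapter(max_retries=0))

try:
    session.get("http://127.0.0.1:12341/a%40b/api/contents")
except requests.exceptions.ConnectionError as exc:
    # urllib3's MaxRetryError(NewConnectionError(...)) arrives here re-wrapped.
    print(exc)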
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of APITest.test_delete_hidden_file _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
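The wait_until_alive() helper shown in the traceback above polls the contents API in a loop and only gives up once its retry budget is exhausted or the server thread has died. A standalone sketch of that polling idea follows; the helper name wait_until_responsive and the constant values are illustrative, not taken from launchnotebook.py.

import time
import requests

MAX_WAITTIME = 30    # seconds to keep polling (illustrative value)
POLL_INTERVAL = 1    # seconds between attempts (illustrative value)

def wait_until_responsive(url):
    """Poll *url* until it answers, or raise if it never comes up."""
    deadline = time.monotonic() + MAX_WAITTIME
    while time.monotonic() < deadline:
        try:
            requests.get(url, timeout=POLL_INTERVAL)
            return  # any HTTP response at all means something is listening
        except requests.exceptions.ConnectionError:
            time.sleep(POLL_INTERVAL)
    raise RuntimeError(f"server at {url} did not come up within {MAX_WAITTIME}s")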
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
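The send() code above is where urllib3's MaxRetryError (caused here by a NewConnectionError with [Errno 111]) is translated into requests.exceptions.ConnectionError. A minimal reproduction sketch, assuming nothing is listening on the port the tests expected the notebook server to bind:

import requests

try:
    # 12341 is the port from the log; with no listener the TCP connect()
    # is refused and requests raises ConnectionError wrapping MaxRetryError.
    requests.get("http://localhost:12341/a%40b/api/contents", timeout=5)
except requests.exceptions.ConnectionError as exc:
    print("connection failed:", exc)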
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of APITest.test_file_checkpoints ________________ 645s ________________ ERROR at setup of APITest.test_get_404_hidden _________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection.
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________________ ERROR at setup of APITest.test_get_bad_type __________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of APITest.test_get_binary_file_contents ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of APITest.test_get_contents_no_such_file ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
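[Editor's note, not part of the log] Every APITest case in this run fails the same way: the NotebookApp thread exits before it ever binds the test port, so the harness's first health-check request is refused outright. A minimal, illustrative reproduction of that request (not part of the test suite; the port 12341 and the URL path are copied from the traceback above):

import requests

url = "http://localhost:12341/a%40b/api/contents"  # port and path taken from the log above
try:
    requests.get(url, timeout=5)
except requests.exceptions.ConnectionError as exc:
    # requests wraps urllib3's MaxRetryError, whose cause is the
    # NewConnectionError ([Errno 111] Connection refused) seen in the traceback.
    print(f"server not reachable: {exc}")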
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of APITest.test_get_dir_no_content _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
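[Editor's note, not part of the log] The loop that turns these refused connections into RuntimeError("The notebook server failed to start") is quoted above from notebook/tests/launchnotebook.py. A simplified sketch of that poll-until-alive pattern, with assumed values for MAX_WAITTIME and POLL_INTERVAL (the real constants live in the harness) and a stand-in helper rather than the actual class method:

import time
import requests

MAX_WAITTIME = 30    # seconds; assumed value for illustration
POLL_INTERVAL = 1    # seconds; assumed value for illustration

def wait_until_alive(url, server_thread):
    """Poll `url` until it answers, or fail fast if the server thread died."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return  # server answered; it is alive
        except Exception as exc:
            # This is the branch taken in the log: the NotebookApp thread has
            # already exited, so further polling is pointless.
            if not server_thread.is_alive():
                raise RuntimeError("The notebook server failed to start") from exc
            time.sleep(POLL_INTERVAL)
    raise TimeoutError("server did not come up within MAX_WAITTIME")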
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_get_nb_contents ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_get_nb_invalid _________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of APITest.test_get_nb_no_content _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
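The ConnectionRefusedError above is raised at the plain-socket layer, before urllib3 adds any wrapping. A small sketch of the same failure using only the standard library, reusing port 12341 from the log:

import socket

try:
    socket.create_connection(("localhost", 12341), timeout=1)
except ConnectionRefusedError as exc:
    print(exc.errno, exc.strerror)  # 111, 'Connection refused' on Linux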
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
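The ValueError branch above documents the two timeout forms requests accepts. A short sketch of both call styles; example.org is only a placeholder host:

import requests

# A single float sets both the connect and the read timeout.
requests.get("https://example.org", timeout=5)
# A (connect, read) tuple sets them independently, which the adapter code above
# unpacks into TimeoutSauce(connect=..., read=...).
requests.get("https://example.org", timeout=(3.05, 27))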
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
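Retry.increment() above raises MaxRetryError as soon as the new Retry object is exhausted; with the policy shown in the traceback, Retry(total=0, read=False), a single connection failure is enough. A sketch of that behaviour, assuming the urllib3 v2 API seen in the log; the NewConnectionError is constructed with conn=None purely for illustration:

from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

retry = Retry(total=0, read=False)  # what requests uses when max_retries=0
err = NewConnectionError(None, "Failed to establish a new connection")
try:
    # total goes 0 -> -1, is_exhausted() becomes true, MaxRetryError is raised.
    retry.increment(method="GET", url="/a%40b/api/contents", error=err)
except MaxRetryError as exc:
    print("exhausted:", exc.reason)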
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
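setup_class() above isolates the test server from the local machine by patching both the environment and jupyter_core's system search paths into temporary directories. A condensed sketch of that pattern; the environment variable and directory names are illustrative, not the exact values from get_patch_env():

import tempfile
from unittest.mock import patch

import jupyter_core.paths

tmp = tempfile.mkdtemp()
env_patch = patch.dict("os.environ", {"JUPYTER_CONFIG_DIR": tmp})
path_patch = patch.multiple(
    jupyter_core.paths,
    SYSTEM_JUPYTER_PATH=[tmp],
    SYSTEM_CONFIG_PATH=[tmp],
)
env_patch.start()
path_patch.start()
try:
    pass  # start the NotebookApp and run requests against it here
finally:
    path_patch.stop()
    env_patch.stop()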
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of APITest.test_get_text_file_contents _____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________________ ERROR at setup of APITest.test_list_dirs ___________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of APITest.test_list_nonexistant_dir ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_list_notebooks _________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________________ ERROR at setup of APITest.test_mkdir _____________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
[... traceback body elided: it repeats the previous setup error verbatim (ConnectionRefusedError -> urllib3 NewConnectionError -> MaxRetryError -> requests.exceptions.ConnectionError while polling http://localhost:12341/a%40b/api/contents) ...]
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of APITest.test_mkdir_hidden_400 ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_mkdir_untitled _________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
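[editor's note] The error blocks in this run repeat the same setup failure: the NotebookApp started in a background thread by setup_class() is no longer alive by the time wait_until_alive() polls it, so every GET against the contents API is refused at the TCP level and the harness converts that into RuntimeError("The notebook server failed to start"). Below is a minimal sketch of the probe the harness performs; the port 12341 and the /a%40b/ base URL are the values shown in this log, the MAX_WAITTIME/POLL_INTERVAL values are assumed to mirror launchnotebook.py, and probe_contents_api is a hypothetical name, not part of the test suite.

import time
import requests

BASE_URL = "http://localhost:12341/a%40b/"   # host, port and url_prefix as they appear in this log
MAX_WAITTIME = 30                            # assumed, mirroring launchnotebook.py's constant
POLL_INTERVAL = 0.1                          # assumed, mirroring launchnotebook.py's constant

def probe_contents_api():
    """Poll the contents API until it answers, roughly what wait_until_alive() does."""
    url = BASE_URL + "api/contents"
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            return requests.get(url)         # refused while nothing listens on the port
        except requests.exceptions.ConnectionError:
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("The notebook server failed to start")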
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________________ ERROR at setup of APITest.test_rename _____________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
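[editor's note] The exception chain is identical in each traceback: socket.connect() fails with [Errno 111], urllib3 wraps it in NewConnectionError, the Retry(total=0, read=False) that requests uses by default is already exhausted so Retry.increment() raises MaxRetryError, and the requests HTTPAdapter re-raises that as requests.exceptions.ConnectionError. The sketch below reproduces that chain against a closed local port; it assumes nothing is listening on 12341 and is only an illustration, not part of the test suite.

import requests
from urllib3.exceptions import MaxRetryError, NewConnectionError

try:
    # With requests' default adapter settings (max_retries=0, i.e. Retry(total=0, read=False)),
    # the very first refused connection exhausts the retry budget.
    requests.get("http://localhost:12341/a%40b/api/contents", timeout=1)
except requests.exceptions.ConnectionError as exc:
    inner = exc.args[0]                          # the MaxRetryError raised by urllib3
    assert isinstance(inner, MaxRetryError)
    assert isinstance(inner.reason, NewConnectionError)
    print("connection refused, as in the log:", inner.reason)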
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of APITest.test_rename_400_hidden _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_rename_existing ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
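# The comment above explains why the harness redirects Jupyter's system- and
# user-level directories into a temporary tree. A minimal sketch of the same
# isolation idea using jupyter_core's environment variables instead of
# patching jupyter_core.paths directly (the variable names are real; the
# directory layout below is only illustrative):
import os
from tempfile import TemporaryDirectory
from unittest.mock import patch

tmp = TemporaryDirectory()
isolated_env = {
    "JUPYTER_CONFIG_DIR": os.path.join(tmp.name, "config"),
    "JUPYTER_DATA_DIR": os.path.join(tmp.name, "data"),
    "JUPYTER_RUNTIME_DIR": os.path.join(tmp.name, "runtime"),
}
with patch.dict(os.environ, isolated_env):
    pass  # start the server / run the test body here
tmp.cleanup()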
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________________ ERROR at setup of APITest.test_save ______________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
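# The Retry.increment() shown above is what converts a single refused
# connection into MaxRetryError: with Retry(total=0) the first failure
# already exhausts the retry budget. A minimal sketch of that behaviour,
# assuming nothing is listening on port 12341:
import urllib3
from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError

pool = urllib3.HTTPConnectionPool("localhost", 12341, retries=Retry(total=0))
try:
    pool.urlopen("GET", "/api/contents")
except MaxRetryError as exc:
    # exc.reason carries the underlying NewConnectionError.
    print("gave up after the first failure:", exc.reason)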
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
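# One layer up, requests' HTTPAdapter.send() (shown above) translates that
# MaxRetryError into requests.exceptions.ConnectionError, which is the
# exception the test harness keeps catching while it polls the server.
# A minimal sketch, assuming port 12341 still has no listener:
import requests

try:
    requests.get("http://localhost:12341/a%40b/api/contents")
except requests.exceptions.ConnectionError as exc:
    print("server not reachable:", exc)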
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________________ ERROR at setup of APITest.test_upload _____________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________________ ERROR at setup of APITest.test_upload_b64 ___________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________________ ERROR at setup of APITest.test_upload_txt ___________________
645s _______________ ERROR at setup of APITest.test_upload_txt_hidden _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection.
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________________ ERROR at setup of APITest.test_upload_v2 ___________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_checkpoints _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
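Every error in this section reduces to the same chain: nothing is listening on the test port, so urllib3 raises NewConnectionError, the no-retry policy (Retry(total=0)) turns it into MaxRetryError, requests re-wraps that as ConnectionError, and setup_class finally reports that the notebook server failed to start. As a rough illustration (editor's sketch, not part of the log), the snippet below reproduces that wrapping against an unused localhost port; the port number 12341 and the /a%40b/api/contents path are copied from the log above, everything else is assumed.

    # Sketch only: reproduces the exception chain seen in the tracebacks above.
    # Assumes nothing is listening on localhost:12341, as on the failing testbed.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.exceptions import MaxRetryError, NewConnectionError

    session = requests.Session()
    # The test harness effectively disables retries, so a refused connection fails fast.
    session.mount("http://", HTTPAdapter(max_retries=0))

    try:
        session.get("http://localhost:12341/a%40b/api/contents", timeout=5)
    except requests.exceptions.ConnectionError as exc:
        reason = exc.args[0]                       # underlying urllib3 MaxRetryError
        assert isinstance(reason, MaxRetryError)
        assert isinstance(reason.reason, NewConnectionError)
        print("connection refused, as in the log:", reason.reason)

In the quoted launchnotebook.py helper, wait_until_alive swallows this ConnectionError while polling and only raises the final RuntimeError once the notebook server thread is no longer alive.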
[The remainder of this chained traceback is identical to the one shown above (NewConnectionError: [Errno 111] Connection refused -> urllib3.exceptions.MaxRetryError -> requests.exceptions.ConnectionError -> RuntimeError: The notebook server failed to start, at notebook/tests/launchnotebook.py:59), and the same error block then repeats verbatim for the setup of GenericFileCheckpointsAPITest.test_checkpoints_separate_root; both are omitted here.]
645s __ ERROR at setup of GenericFileCheckpointsAPITest.test_config_did_something ___ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection.
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_400_hidden _____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_copy ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_dir_400 _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_path ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_put_400 _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
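# --- Editor's illustrative sketch (not part of the captured log). The frames above
# show Retry(total=0, read=False) exhausting on the very first connection error:
# urllib3 wraps the refused connect in NewConnectionError, increment() raises
# MaxRetryError, and requests' HTTPAdapter.send() re-raises it as ConnectionError.
# A minimal reproduction, assuming (as in this run) that nothing listens on
# localhost:12341; the /a%40b prefix is the URL-encoded '/a@b/' base_url the test
# class configures:

import requests

try:
    requests.get("http://localhost:12341/a%40b/api/contents")
except requests.exceptions.ConnectionError as exc:
    # exc wraps urllib3's MaxRetryError, whose reason is the NewConnectionError
    # produced by the refused TCP connect ([Errno 111]).
    print(type(exc).__name__, exc)
# --- end of sketch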
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_copy_put_400_hidden ___ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
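# --- Editor's illustrative sketch (not part of the captured log). The harness code
# quoted above starts NotebookApp in a daemon thread and then polls
# <base_url>/api/contents; once the thread has died, any request failure is turned
# into RuntimeError("The notebook server failed to start"). A simplified polling
# helper in the same spirit; the MAX_WAITTIME / POLL_INTERVAL values and the final
# TimeoutError are assumptions here (the real constants live in
# notebook/tests/launchnotebook.py and are not shown in this log):

import time
import requests

MAX_WAITTIME = 30      # seconds (assumed value)
POLL_INTERVAL = 0.1    # seconds (assumed value)

def wait_until_alive(base_url, server_thread):
    url = base_url + 'api/contents'
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return  # server answered; it is alive
        except Exception as e:
            if not server_thread.is_alive():
                # Server thread already exited: give up immediately,
                # chaining the connection error as the cause.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise TimeoutError("server did not come up within MAX_WAITTIME")
# --- end of sketch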
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_create_untitled _____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
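# --- Editor's illustrative sketch (not part of the captured log). At the lowest
# level, the create_connection() frames below reduce to a plain TCP connect against
# a port with no listener, which the kernel rejects with ECONNREFUSED (errno 111).
# Port 12341 is taken from this log; any local port nothing is bound to behaves the
# same way:

import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.connect(("localhost", 12341))  # assumed: no server bound to this port
except ConnectionRefusedError as e:
    assert e.errno == errno.ECONNREFUSED  # [Errno 111] Connection refused
finally:
    sock.close()
# --- end of sketch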
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_create_untitled_txt ___
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_delete_hidden_dir ____
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
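A minimal sketch (not taken from the test suite) of the path-patching technique used in setup_class just below: unittest.mock.patch.multiple temporarily rebinds module-level attributes and restores them on stop. It assumes only that jupyter_core is importable; the directory values are placeholders.

    import tempfile
    from unittest.mock import patch

    import jupyter_core.paths

    tmp = tempfile.mkdtemp()
    # Rebind the module-level search paths so nothing outside tmp is consulted.
    patcher = patch.multiple(
        jupyter_core.paths,
        SYSTEM_JUPYTER_PATH=[tmp + '/share/jupyter'],
        SYSTEM_CONFIG_PATH=[tmp + '/etc/jupyter'],
    )
    patcher.start()
    try:
        pass  # exercise code that reads the patched paths here
    finally:
        patcher.stop()  # original values are restored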
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_delete_hidden_file ____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_file_checkpoints _____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_404_hidden ______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______ ERROR at setup of GenericFileCheckpointsAPITest.test_get_bad_type _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_binary_file_contents _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_contents_no_such_file _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___ ERROR at setup of GenericFileCheckpointsAPITest.test_get_dir_no_content ____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
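The setup error above boils down to the harness polling the contents API until the server answers: wait_until_alive() keeps calling fetch_url() and only raises RuntimeError once the notebook thread has died. A minimal sketch of that polling pattern, where the constants and the server_thread argument are illustrative rather than the harness's actual configuration:

import time
import requests

MAX_WAITTIME = 30     # assumed value: seconds to keep polling
POLL_INTERVAL = 0.1   # assumed value: seconds between attempts

def wait_until_alive(base_url, server_thread):
    """Poll base_url + 'api/contents' until the server responds."""
    url = base_url + 'api/contents'
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return  # any HTTP response at all means the server is up
        except Exception as e:
            if not server_thread.is_alive():
                # the server thread died before ever answering
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise TimeoutError("The notebook server did not respond in time")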
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
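The Retry(total=0, connect=None, read=False, ...) object shown in the traceback is the requests default, so the very first ECONNREFUSED exhausts the budget and increment() raises MaxRetryError. A small, generic example of relaxing that policy through urllib3's public Retry API (illustrative values, not something this test suite does):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Illustrative policy: retry connection errors and a few 5xx statuses,
# backing off between attempts instead of failing on the first refusal.
retry = Retry(total=5, connect=5, backoff_factor=0.2,
              status_forcelist=(500, 502, 503, 504))

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retry))
session.mount("https://", HTTPAdapter(max_retries=retry))

# Each failed attempt goes through Retry.increment(); once the budget is
# exhausted urllib3 raises MaxRetryError, which requests then wraps in
# requests.exceptions.ConnectionError, as seen in the traceback above.
# session.get("http://localhost:12341/a%40b/api/contents")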
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_contents _____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
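setup_class above isolates the server by pointing environment variables and jupyter_core.paths at throwaway directories before starting the app in a background thread. A condensed sketch of that isolation technique; the environment variable names are assumptions, not the harness's get_patch_env():

import os
from tempfile import TemporaryDirectory
from unittest.mock import patch

import jupyter_core.paths

tmp_dir = TemporaryDirectory()

def tmp(*parts):
    path = os.path.join(tmp_dir.name, *parts)
    os.makedirs(path, exist_ok=True)
    return path

# Redirect user-level directories via the environment (variable names are
# assumptions here) and system-level search paths via patch.multiple().
env_patch = patch.dict(os.environ, {
    "HOME": tmp("home"),
    "JUPYTER_CONFIG_DIR": tmp("config"),
    "JUPYTER_DATA_DIR": tmp("data"),
})
path_patch = patch.multiple(
    jupyter_core.paths,
    SYSTEM_JUPYTER_PATH=[tmp("share", "jupyter")],
    SYSTEM_CONFIG_PATH=[tmp("etc", "jupyter")],
)

env_patch.start()
path_patch.start()
try:
    pass  # start the server / run the tests here
finally:
    path_patch.stop()
    env_patch.stop()
    tmp_dir.cleanup()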
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
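HTTPAdapter.send() above also shows how requests maps its timeout argument onto urllib3: a single float or a (connect, read) tuple becomes a Timeout object. A generic usage example, where the URL simply mirrors the one in the traceback:

import requests

try:
    # 3.05 s to establish the TCP connection, 10 s to read the response;
    # fetch_url() above passes no timeout at all, so it relies on the
    # connection being refused quickly rather than hanging.
    resp = requests.get("http://localhost:12341/a%40b/api/contents",
                        timeout=(3.05, 10))
except requests.exceptions.ConnectTimeout:
    print("no answer within the connect timeout")
except requests.exceptions.ConnectionError as e:
    print(f"connection failed: {e}")  # e.g. [Errno 111] Connection refused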
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_invalid ______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
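Every "The above exception was the direct cause of the following exception" block in this log comes from explicit exception chaining with raise ... from. A toy reproduction of the mechanism, unrelated to the test suite itself:

def fetch():
    # stand-in for the requests.get() call that fails in the log
    raise ConnectionRefusedError(111, "Connection refused")

try:
    try:
        fetch()
    except ConnectionRefusedError as exc:
        # "raise ... from exc" sets __cause__, which pytest reports as
        # "The above exception was the direct cause of the following exception"
        raise RuntimeError("The notebook server failed to start") from exc
except RuntimeError as err:
    assert isinstance(err.__cause__, ConnectionRefusedError)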
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_get_nb_no_content ____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _ ERROR at setup of GenericFileCheckpointsAPITest.test_get_text_file_contents __ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_list_dirs ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __ ERROR at setup of GenericFileCheckpointsAPITest.test_list_nonexistant_dir ___ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_list_notebooks ______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir __________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir_hidden_400 _____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_mkdir_untitled ______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of GenericFileCheckpointsAPITest.test_rename __________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_rename_400_hidden ____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s         cls.path_patch = patch.multiple(
645s             jupyter_core.paths,
645s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
645s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
645s         )
645s         cls.path_patch.start()
645s 
645s         config = cls.config or Config()
645s         config.NotebookNotary.db_file = ':memory:'
645s 
645s         cls.token = hexlify(os.urandom(4)).decode('ascii')
645s 
645s         started = Event()
645s         def start_thread():
645s             try:
645s                 bind_args = cls.get_bind_args()
645s                 app = cls.notebook = NotebookApp(
645s                     port_retries=0,
645s                     open_browser=False,
645s                     config_dir=cls.config_dir,
645s                     data_dir=cls.data_dir,
645s                     runtime_dir=cls.runtime_dir,
645s                     notebook_dir=cls.notebook_dir,
645s                     base_url=cls.url_prefix,
645s                     config=config,
645s                     allow_root=True,
645s                     token=cls.token,
645s                     **bind_args
645s                 )
645s                 if "asyncio" in sys.modules:
645s                     app._init_asyncio_patch()
645s                     import asyncio
645s 
645s                     asyncio.set_event_loop(asyncio.new_event_loop())
645s                     # Patch the current loop in order to match production
645s                     # behavior
645s                     import nest_asyncio
645s 
645s                     nest_asyncio.apply()
645s                 # don't register signal handler during tests
645s                 app.init_signal = lambda : None
645s                 # clear log handlers and propagate to root for nose to capture it
645s                 # needs to be redone after initialize, which reconfigures logging
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 app.initialize(argv=cls.get_argv())
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 loop = IOLoop.current()
645s                 loop.add_callback(started.set)
645s                 app.start()
645s             finally:
645s                 # set the event, so failure to start doesn't cause a hang
645s                 started.set()
645s                 app.session_manager.close()
645s         cls.notebook_thread = Thread(target=start_thread)
645s         cls.notebook_thread.daemon = True
645s         cls.notebook_thread.start()
645s         started.wait()
645s >       cls.wait_until_alive()
645s 
645s notebook/tests/launchnotebook.py:198:
645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
645s 
645s cls = 
645s 
645s     @classmethod
645s     def wait_until_alive(cls):
645s         """Wait for the server to be alive"""
645s         url = cls.base_url() + 'api/contents'
645s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
645s             try:
645s                 cls.fetch_url(url)
645s             except ModuleNotFoundError as error:
645s                 # Errors that should be immediately thrown back to caller
645s                 raise error
645s             except Exception as e:
645s                 if not cls.notebook_thread.is_alive():
645s >                   raise RuntimeError("The notebook server failed to start") from e
645s E                   RuntimeError: The notebook server failed to start
645s 
645s notebook/tests/launchnotebook.py:59: RuntimeError
645s _____ ERROR at setup of GenericFileCheckpointsAPITest.test_rename_existing _____
645s 
645s self = 
645s 
645s     def _new_conn(self) -> socket.socket:
645s         """Establish a socket connection and set nodelay settings on it.
645s 
645s         :return: New socket connection.
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s         cls.path_patch = patch.multiple(
645s             jupyter_core.paths,
645s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
645s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
645s         )
645s         cls.path_patch.start()
645s 
645s         config = cls.config or Config()
645s         config.NotebookNotary.db_file = ':memory:'
645s 
645s         cls.token = hexlify(os.urandom(4)).decode('ascii')
645s 
645s         started = Event()
645s         def start_thread():
645s             try:
645s                 bind_args = cls.get_bind_args()
645s                 app = cls.notebook = NotebookApp(
645s                     port_retries=0,
645s                     open_browser=False,
645s                     config_dir=cls.config_dir,
645s                     data_dir=cls.data_dir,
645s                     runtime_dir=cls.runtime_dir,
645s                     notebook_dir=cls.notebook_dir,
645s                     base_url=cls.url_prefix,
645s                     config=config,
645s                     allow_root=True,
645s                     token=cls.token,
645s                     **bind_args
645s                 )
645s                 if "asyncio" in sys.modules:
645s                     app._init_asyncio_patch()
645s                     import asyncio
645s 
645s                     asyncio.set_event_loop(asyncio.new_event_loop())
645s                     # Patch the current loop in order to match production
645s                     # behavior
645s                     import nest_asyncio
645s 
645s                     nest_asyncio.apply()
645s                 # don't register signal handler during tests
645s                 app.init_signal = lambda : None
645s                 # clear log handlers and propagate to root for nose to capture it
645s                 # needs to be redone after initialize, which reconfigures logging
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 app.initialize(argv=cls.get_argv())
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 loop = IOLoop.current()
645s                 loop.add_callback(started.set)
645s                 app.start()
645s             finally:
645s                 # set the event, so failure to start doesn't cause a hang
645s                 started.set()
645s                 app.session_manager.close()
645s         cls.notebook_thread = Thread(target=start_thread)
645s         cls.notebook_thread.daemon = True
645s         cls.notebook_thread.start()
645s         started.wait()
645s >       cls.wait_until_alive()
645s 
645s notebook/tests/launchnotebook.py:198:
645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
645s 
645s cls = 
645s 
645s     @classmethod
645s     def wait_until_alive(cls):
645s         """Wait for the server to be alive"""
645s         url = cls.base_url() + 'api/contents'
645s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
645s             try:
645s                 cls.fetch_url(url)
645s             except ModuleNotFoundError as error:
645s                 # Errors that should be immediately thrown back to caller
645s                 raise error
645s             except Exception as e:
645s                 if not cls.notebook_thread.is_alive():
645s >                   raise RuntimeError("The notebook server failed to start") from e
645s E                   RuntimeError: The notebook server failed to start
645s 
645s notebook/tests/launchnotebook.py:59: RuntimeError
645s __________ ERROR at setup of GenericFileCheckpointsAPITest.test_save ___________
645s 
645s self = 
645s 
645s     def _new_conn(self) -> socket.socket:
645s         """Establish a socket connection and set nodelay settings on it.
645s 
645s         :return: New socket connection.
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of GenericFileCheckpointsAPITest.test_upload __________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_b64 ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_txt ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
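The block above ends in RuntimeError("The notebook server failed to start"), raised by wait_until_alive() once a poll of the contents API fails while the server thread is no longer alive. A minimal sketch of that polling pattern follows; MAX_WAITTIME and POLL_INTERVAL values are assumptions for illustration, since the log does not show them.

    import time
    import requests

    MAX_WAITTIME = 30     # assumed: seconds to keep polling before giving up
    POLL_INTERVAL = 0.1   # assumed: pause between polls

    def wait_until_alive(base_url, server_thread):
        # Poll the contents API until the server answers, mirroring the
        # notebook/tests/launchnotebook.py logic quoted in the traceback.
        url = base_url + 'api/contents'
        for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
            try:
                requests.get(url)
                return  # any HTTP response means the server is up
            except Exception as exc:
                if not server_thread.is_alive():
                    # The server thread already exited, so polling can never succeed.
                    raise RuntimeError("The notebook server failed to start") from exc
                time.sleep(POLL_INTERVAL)
        raise RuntimeError("The notebook server never became reachable")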
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_txt_hidden ____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
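Each of these setup errors follows the same chain: nothing is listening on localhost:12341, so urllib3 raises NewConnectionError, the adapter's Retry(total=0) is exhausted on the first attempt and becomes MaxRetryError, and requests re-raises that as ConnectionError. A standalone reproduction of that chain, as a hypothetical script outside the test suite, assuming only that no server is bound to the chosen port:

    import requests
    from requests.adapters import HTTPAdapter

    session = requests.Session()
    # requests' default adapter uses max_retries=0, i.e. Retry(total=0):
    # a single refused connection immediately exhausts the retry budget.
    session.mount('http://', HTTPAdapter(max_retries=0))

    try:
        session.get('http://localhost:12341/a%40b/api/contents')
    except requests.exceptions.ConnectionError as err:
        # err wraps urllib3.exceptions.MaxRetryError, whose reason is the
        # NewConnectionError caused by ECONNREFUSED (Errno 111).
        print(type(err).__name__, err)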
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of GenericFileCheckpointsAPITest.test_upload_v2 ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
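# --- Editorial sketch (not part of the autopkgtest output above) ---
# The setup_class and wait_until_alive listings quoted in this traceback follow a
# simple pattern: start NotebookApp in a daemon thread, then poll the contents API
# until it answers. Reduced to its essentials it looks roughly like the following;
# MAX_WAITTIME, POLL_INTERVAL and the URL are placeholder values standing in for
# the constants defined in notebook/tests/launchnotebook.py.
import time
import requests

MAX_WAITTIME = 30        # assumed value, for illustration only
POLL_INTERVAL = 0.1      # assumed value, for illustration only
url = "http://127.0.0.1:12341/a%40b/api/contents"   # host, port and path taken from the log

for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
    try:
        requests.get(url, timeout=1)
        break                       # server answered: it is alive
    except requests.ConnectionError:
        time.sleep(POLL_INTERVAL)   # not up yet; in this run it never comes up
else:
    # mirrors the RuntimeError("The notebook server failed to start") seen above
    raise RuntimeError("The notebook server failed to start")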
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
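# --- Editorial sketch (not part of the autopkgtest output above) ---
# The Retry(total=0, connect=None, read=False, redirect=None, status=None) object in
# this traceback appears to come from requests' HTTPAdapter default max_retries, so a
# refused connection is wrapped first in urllib3's MaxRetryError and then re-raised as
# requests.exceptions.ConnectionError. A minimal reproduction, assuming nothing is
# listening on 127.0.0.1:12341:
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=0))
try:
    session.get("http://127.0.0.1:12341/a%40b/api/contents", timeout=1)
except requests.exceptions.ConnectionError as exc:
    print("refused, as in the log:", exc)   # caused by NewConnectionError / ECONNREFUSED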
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of KernelAPITest.test_connections _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
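# --- Editorial sketch (not part of the autopkgtest output above) ---
# How the Retry object quoted in this traceback becomes exhausted: increment()
# returns a copy with the counters reduced, and once is_exhausted() is true it raises
# MaxRetryError from the underlying error, which is the exception reported at
# /usr/lib/python3/dist-packages/urllib3/util/retry.py:515 above. A rough illustration;
# the NewConnectionError is constructed by hand here, so treat it as an assumption
# rather than exactly what urllib3 does internally.
from urllib3.exceptions import MaxRetryError, NewConnectionError
from urllib3.util.retry import Retry

retry = Retry(total=0, read=False)
error = NewConnectionError(None, "Failed to establish a new connection: [Errno 111] Connection refused")
try:
    retry.increment(method="GET", url="/a%40b/api/contents", error=error)
except MaxRetryError as exc:
    print("retries exhausted:", exc.reason)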
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of KernelAPITest.test_default_kernel ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of KernelAPITest.test_kernel_handler ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of KernelAPITest.test_main_kernel_handler ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
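The setup_class body quoted in this traceback is what ultimately produces the RuntimeError reported for each test: the notebook server is started in a daemon thread, and wait_until_alive() polls the contents API, converting connection failures into "The notebook server failed to start" once that thread has died. A condensed, self-contained sketch of the polling pattern (names, timeouts, and the health URL are illustrative, not the harness's exact code):

    import time
    import threading
    import requests

    MAX_WAITTIME = 30      # seconds; illustrative values, not the harness's constants
    POLL_INTERVAL = 0.1

    def wait_until_alive(url, server_thread):
        """Poll `url` until it answers, or fail fast if the server thread died."""
        for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
            try:
                requests.get(url)
                return
            except Exception as e:
                if not server_thread.is_alive():
                    # Mirrors launchnotebook.py: a dead thread means startup failed
                    raise RuntimeError("The notebook server failed to start") from e
                time.sleep(POLL_INTERVAL)
        raise TimeoutError("The notebook server did not respond in time")

    # Usage sketch: thread = threading.Thread(target=start_server, daemon=True); thread.start()
    # wait_until_alive("http://localhost:12341/a%40b/api/contents", thread)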
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of KernelAPITest.test_no_kernels ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of AsyncKernelAPITest.test_connections _____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
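The Retry object in these frames is Retry(total=0, connect=None, read=False, redirect=None, status=None): with total=0, the first connection error exhausts the budget and increment() raises MaxRetryError, which requests later re-wraps. A hedged sketch of attaching an equivalent no-retry policy to a requests session through the standard HTTPAdapter/Retry API; the URL is the endpoint seen in the traceback:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # total=0 gives up after the first failure, matching the Retry object
    # shown in the traceback above.
    session.mount("http://", HTTPAdapter(max_retries=Retry(total=0)))

    try:
        session.get("http://localhost:12341/a%40b/api/contents")
    except requests.exceptions.ConnectionError as exc:
        # urllib3's MaxRetryError surfaces as requests.ConnectionError.
        print("server not reachable:", exc)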
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
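adapter.send() above accepts timeout either as a single float or as a (connect, read) tuple and normalizes it into TimeoutSauce(connect=..., read=...). A small illustration of the two call shapes; the endpoint is again the one from the traceback, and nothing needs to be listening on it for the point to hold:

    import requests

    url = "http://localhost:12341/a%40b/api/contents"  # endpoint from the traceback

    try:
        # A single float covers both phases; a (connect, read) tuple sets them
        # separately, which send() unpacks into TimeoutSauce(connect=..., read=...).
        requests.get(url, timeout=(3.05, 27))
    except requests.exceptions.ConnectionError:
        pass  # expected while nothing is listening on the port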
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/kernels/tests/test_kernels_api.py:206: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of AsyncKernelAPITest.test_default_kernel ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
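The urlopen() docstring quoted above (it repeats once per errored test) points callers to the request() convenience method rather than the low-level pool call. A minimal, illustrative urllib3 equivalent of the GET the test client is attempting; retries=False mirrors the "disabled, raise immediately" behaviour described for the retries parameter:

    import urllib3

    http = urllib3.PoolManager()
    try:
        # request() handles header merging, URL encoding and retry policy,
        # which is what the docstring recommends over raw urlopen().
        resp = http.request("GET", "http://localhost:12341/a%40b/api/contents",
                            retries=False)
        print(resp.status)
    except urllib3.exceptions.HTTPError as exc:
        # With retries disabled the connection error propagates immediately.
        print("request failed:", exc)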
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/kernels/tests/test_kernels_api.py:206: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of AsyncKernelAPITest.test_kernel_handler ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/kernels/tests/test_kernels_api.py:206: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of AsyncKernelAPITest.test_main_kernel_handler _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/kernels/tests/test_kernels_api.py:206: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of AsyncKernelAPITest.test_no_kernels _____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s [... traceback identical to the previous error: create_connection -> ConnectionRefusedError ([Errno 111] Connection refused) -> NewConnectionError -> MaxRetryError -> requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents; setup_class -> wait_until_alive then re-raises as shown below ...]
645s raise SkipTest("AsyncKernelAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/kernels/tests/test_kernels_api.py:206: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of KernelFilterTest.test_config ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
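# --- illustrative sketch, not part of the captured test output ---
# Every setup error in this log shares one failure chain: the notebook server
# never listens on localhost:12341, so socket.connect() fails with
# ECONNREFUSED, urllib3 wraps that in NewConnectionError, Retry(total=0)
# immediately raises MaxRetryError, and requests surfaces it as
# ConnectionError. A minimal standalone reproduction of that chain, assuming
# nothing is listening on port 12341 (port and URL path are taken from the
# log; any closed local port behaves the same):
import requests

try:
    # Same URL shape the harness polls: base_url ('/a@b/', percent-encoded) + 'api/contents'
    requests.get("http://localhost:12341/a%40b/api/contents", timeout=5)
except requests.exceptions.ConnectionError as exc:
    # The underlying "[Errno 111] Connection refused" is carried inside exc
    print("server not reachable:", exc)
# --- end of illustrative sketch ---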
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 
'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of KernelCullingTest.test_culling _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
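# --- illustrative sketch, not part of the captured test output ---
# The harness's wait_until_alive (quoted above) polls GET <base_url>api/contents
# until the server answers, and gives up early with RuntimeError if the server
# thread has already died. A standalone sketch of that polling pattern; the
# MAX_WAITTIME and POLL_INTERVAL values below are placeholders, not the
# constants defined in notebook/tests/launchnotebook.py.
import time
import requests

MAX_WAITTIME = 30      # seconds to keep polling (placeholder value)
POLL_INTERVAL = 0.1    # seconds between attempts (placeholder value)

def wait_until_alive(url, thread_is_alive=lambda: True):
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url, timeout=POLL_INTERVAL)
            return  # any HTTP answer means the server is up
        except requests.exceptions.RequestException as exc:
            if not thread_is_alive():
                # mirrors the RuntimeError seen in the log when the
                # NotebookApp thread exits before binding its port
                raise RuntimeError("The notebook server failed to start") from exc
            time.sleep(POLL_INTERVAL)
    raise TimeoutError(f"no response from {url} after {MAX_WAITTIME}s")
# --- end of illustrative sketch ---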
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of APITest.test_get_kernel_resource_file ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________________ ERROR at setup of APITest.test_get_kernelspec _________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of APITest.test_get_kernelspec_spaces _____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of APITest.test_get_nonexistant_kernelspec ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of APITest.test_get_nonexistant_resource ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______________ ERROR at setup of APITest.test_list_kernelspecs ________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of APITest.test_list_kernelspecs_bad ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________________ ERROR at setup of APITest.test_list_formats __________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________________ ERROR at setup of SessionAPITest.test_create _________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
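[Editor's note, not part of the captured log] The RuntimeError above is raised by wait_until_alive in notebook/tests/launchnotebook.py once the server thread is no longer alive, so the polling loop gives up early. A rough sketch of the same liveness check done directly at socket level (host and port taken from the traceback; the helper name is illustrative):

    import socket

    def server_listening(host="localhost", port=12341, timeout=1.0):
        """Return True if something accepts TCP connections on (host, port)."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # ConnectionRefusedError ([Errno 111]) lands here
            return False

    print(server_listening())  # False while the notebook server is not running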
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
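[Editor's illustrative sketch, not part of the captured log] The setup_class comments above describe isolating the tests from the machine's own Jupyter data and config directories by patching the environment and jupyter_core.paths. A minimal standalone version of that patching pattern, assuming jupyter_core is importable (the JUPYTER_CONFIG_DIR variable and temporary paths here are only illustrative):

    import os
    import tempfile
    from unittest.mock import patch

    import jupyter_core.paths

    tmp = tempfile.mkdtemp()
    # Redirect user-level config via the environment, and system-level
    # search paths via module attributes, for the duration of the run.
    env_patch = patch.dict(os.environ, {"JUPYTER_CONFIG_DIR": os.path.join(tmp, "config")})
    path_patch = patch.multiple(
        jupyter_core.paths,
        SYSTEM_JUPYTER_PATH=[os.path.join(tmp, "share", "jupyter")],
        SYSTEM_CONFIG_PATH=[os.path.join(tmp, "etc", "jupyter")],
    )
    env_patch.start()
    path_patch.start()
    try:
        pass  # code under test sees only the patched locations
    finally:
        path_patch.stop()
        env_patch.stop()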
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of SessionAPITest.test_create_console_session _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
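[Editor's illustrative sketch, not part of the captured log] The wait_until_alive classmethod shown in these tracebacks polls the contents API and gives up as soon as the notebook thread has died, which is how a refused connection becomes "The notebook server failed to start". A standalone version of the same polling pattern, with max_wait and poll_interval standing in for the suite's MAX_WAITTIME and POLL_INTERVAL constants:

    import time
    import requests

    def wait_until_alive(url, server_thread, max_wait=30.0, poll_interval=0.5):
        # Poll `url` until it answers; fail fast if the server thread exited.
        deadline = time.monotonic() + max_wait
        while time.monotonic() < deadline:
            try:
                requests.get(url, timeout=poll_interval)
                return
            except requests.exceptions.RequestException as exc:
                if not server_thread.is_alive():
                    raise RuntimeError("The notebook server failed to start") from exc
                time.sleep(poll_interval)
        raise TimeoutError(f"{url} did not become reachable within {max_wait}s")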
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of SessionAPITest.test_create_deprecated ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
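# --- Illustrative sketch, not part of the test suite or library source shown in this
# --- traceback: how the exception chain logged above surfaces to callers. requests
# --- wraps urllib3's MaxRetryError in requests.exceptions.ConnectionError, which is
# --- what wait_until_alive() ends up catching. Assumes, as in this run, that nothing
# --- is listening on localhost:12341.
import requests

try:
    requests.get("http://localhost:12341/a%40b/api/contents", timeout=1)
except requests.exceptions.ConnectionError as exc:
    max_retry_error = exc.args[0]          # the underlying urllib3 MaxRetryError
    print(type(max_retry_error).__name__)  # -> MaxRetryError
    print(max_retry_error.reason)          # -> NewConnectionError(... Connection refused)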
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of SessionAPITest.test_create_file_session ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
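# --- Illustrative sketch, not taken from notebook/tests/launchnotebook.py: the
# --- poll-until-alive pattern that wait_until_alive() in this traceback relies on.
# --- MAX_WAITTIME, POLL_INTERVAL and the URL below are assumed values for the sketch.
import time
import requests

MAX_WAITTIME = 30    # seconds to keep polling (assumed)
POLL_INTERVAL = 1    # seconds between attempts (assumed)

def wait_for_server(url="http://localhost:12341/a%40b/api/contents"):
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url, timeout=POLL_INTERVAL)
            return True                   # server answered, even if with an HTTP error
        except requests.exceptions.ConnectionError:
            time.sleep(POLL_INTERVAL)     # not up yet, try again
    return False                          # corresponds to the RuntimeError path in setup_class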
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of SessionAPITest.test_create_with_kernel_id __________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________________ ERROR at setup of SessionAPITest.test_delete _________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of SessionAPITest.test_modify_kernel_id ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
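As the requests adapter code quoted above shows, a refused connection ultimately reaches the caller as requests.exceptions.ConnectionError (wrapping urllib3's MaxRetryError). A minimal sketch of handling that case; the helper name fetch_or_none is illustrative, and the URL is the one exercised in this log:

    import requests

    def fetch_or_none(url):
        # GET url, returning None instead of raising when the server is unreachable.
        try:
            return requests.get(url, timeout=5)
        except requests.exceptions.ConnectionError:
            # urllib3 exhausted its retries (MaxRetryError) on a refused connection
            return None

    resp = fetch_or_none("http://localhost:12341/a%40b/api/contents")  # None while the server is down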
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of SessionAPITest.test_modify_kernel_name ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
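For context, the wait_until_alive loop quoted in the traceback above boils down to the following polling pattern (a minimal sketch, not the notebook test suite's actual code; MAX_WAITTIME, POLL_INTERVAL and the argument names are illustrative):

    import time
    import requests

    MAX_WAITTIME = 30   # seconds to keep polling (illustrative value)
    POLL_INTERVAL = 1   # seconds between attempts (illustrative value)

    def wait_until_alive(url, server_thread):
        # Poll url until it answers, or fail fast if the server thread has died.
        for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
            try:
                requests.get(url)
                return  # the server answered, so it is alive
            except Exception as exc:
                if not server_thread.is_alive():
                    # the background server crashed before it could bind the port
                    raise RuntimeError("The notebook server failed to start") from exc
                time.sleep(POLL_INTERVAL)
        raise RuntimeError("The notebook server did not become ready in time")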
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of SessionAPITest.test_modify_path _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of SessionAPITest.test_modify_path_deprecated _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of SessionAPITest.test_modify_type _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
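The traceback above repeats for every test in the class: urllib3 cannot open a TCP connection to localhost:12341, Retry(total=0) is exhausted on the first refused attempt, and requests re-raises the MaxRetryError as a ConnectionError, which the harness finally turns into "The notebook server failed to start". The following minimal sketch is not part of the test suite; it only reproduces the same surface behaviour with the public requests API, reusing the port and URL path that appear in this log:

    import requests
    from requests.adapters import HTTPAdapter

    session = requests.Session()
    # max_retries=0 mirrors the Retry(total=0, ...) object shown in the log:
    # a single connection attempt, no retries of any kind.
    session.mount("http://", HTTPAdapter(max_retries=0))
    try:
        session.get("http://localhost:12341/a%40b/api/contents", timeout=5)
    except requests.exceptions.ConnectionError as exc:
        # With nothing listening on that port this prints the same
        # "Max retries exceeded ... Connection refused" message as above.
        print(exc)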
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of AsyncSessionAPITest.test_create _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______ ERROR at setup of AsyncSessionAPITest.test_create_console_session _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of AsyncSessionAPITest.test_create_deprecated _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of AsyncSessionAPITest.test_create_file_session ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______ ERROR at setup of AsyncSessionAPITest.test_create_with_kernel_id _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
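The call chain above (``_make_request`` -> ``connect`` -> ``_new_conn``) fails because nothing is listening on the target port. A minimal reproduction of the same ``NewConnectionError``, assuming port 12341 is unused locally:

    import urllib3
    from urllib3.exceptions import NewConnectionError

    # retries=False makes urlopen() raise the underlying exception directly
    # instead of wrapping it in MaxRetryError (see the docstring above).
    pool = urllib3.HTTPConnectionPool("localhost", 12341, retries=False)
    try:
        pool.request("GET", "/a%40b/api/contents")
    except NewConnectionError as exc:
        print("connect failed:", exc)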
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
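The adapter code above converts the requests-level ``timeout`` into a urllib3 ``Timeout`` (``TimeoutSauce`` is requests' import alias for it). The same mapping, restated as a standalone sketch:

    from urllib3.util import Timeout

    def to_urllib3_timeout(timeout):
        """Sketch of the tuple/float handling shown in send() above."""
        if isinstance(timeout, tuple):
            try:
                connect, read = timeout
            except ValueError:
                raise ValueError(
                    f"Invalid timeout {timeout}. Pass a (connect, read) tuple "
                    f"or a single float."
                )
            return Timeout(connect=connect, read=read)
        if isinstance(timeout, Timeout):
            return timeout
        return Timeout(connect=timeout, read=timeout)

    print(to_urllib3_timeout((3.05, 27)))  # Timeout(connect=3.05, read=27, total=None)
    print(to_urllib3_timeout(None))        # Timeout(connect=None, read=None, total=None)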
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
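``Retry.increment`` above decrements whichever counter applies and raises ``MaxRetryError`` once the new object is exhausted. With ``total=0``, which is what the adapter's ``max_retries`` produces here, a single error uses up the budget. An illustration; a plain ``OSError`` stands in for the ``NewConnectionError`` seen in the log:

    from urllib3.exceptions import MaxRetryError
    from urllib3.util.retry import Retry

    retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)
    try:
        retry.increment(method="GET", url="/a%40b/api/contents",
                        error=OSError(111, "Connection refused"))
    except MaxRetryError as exc:
        print(exc.reason)   # the original error is preserved as the reason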
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
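At the requests layer, the ``except MaxRetryError`` ladder above maps a ``NewConnectionError`` cause to ``requests.exceptions.ConnectionError``, which is what the test harness ultimately sees. Reproducible with any unused local port (12341 assumed free here):

    import requests

    try:
        requests.get("http://localhost:12341/a%40b/api/contents")
    except requests.exceptions.ConnectionError as exc:
        # Wraps urllib3's MaxRetryError, itself caused by NewConnectionError
        # ([Errno 111] Connection refused).
        print(exc)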
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of AsyncSessionAPITest.test_delete _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
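For context, ``wait_until_alive`` above keeps polling ``api/contents`` and converts the connection failure into the ``RuntimeError`` once the notebook server thread has died. A simplified, self-contained sketch; the sleep, the final timeout, and the MAX_WAITTIME/POLL_INTERVAL values are assumptions, the real logic lives in notebook/tests/launchnotebook.py:

    import time
    import requests

    MAX_WAITTIME = 30     # seconds; assumed value
    POLL_INTERVAL = 0.1   # seconds; assumed value

    def wait_until_alive(base_url, server_thread):
        """Poll the contents API until the server responds (sketch)."""
        url = base_url + "api/contents"
        for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
            try:
                requests.get(url)
                return
            except Exception as e:
                if not server_thread.is_alive():
                    raise RuntimeError("The notebook server failed to start") from e
                time.sleep(POLL_INTERVAL)
        raise RuntimeError("Timed out waiting for the notebook server")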
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
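``create_connection`` above tries every ``getaddrinfo`` result in turn, applying the socket options first (``[(6, 1, 1)]`` is ``IPPROTO_TCP``/``TCP_NODELAY``) and re-raising the last error when none connects. A reduced sketch of that loop:

    import socket

    def connect_first(host, port, timeout=None):
        """Try each getaddrinfo result until one connects (simplified)."""
        err = None
        for af, socktype, proto, _canon, sa in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
        ):
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)
                sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                if timeout is not None:
                    sock.settimeout(timeout)
                sock.connect(sa)
                return sock
            except OSError as e:
                err = e
                if sock is not None:
                    sock.close()
        raise err if err else OSError("getaddrinfo returned no results")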
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
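Besides retries, the docstring above also explains ``preload_content``/``release_conn``. The streaming pattern it describes looks roughly like this (the URL is only a placeholder and the request needs network access):

    import urllib3

    http = urllib3.PoolManager()
    try:
        # preload_content=False leaves the body unread; the connection returns
        # to the pool only once it is fully read or explicitly released.
        resp = http.request("GET", "http://example.com/", preload_content=False)
        try:
            for chunk in resp.stream(1024):
                pass  # consume the body incrementally
        finally:
            resp.release_conn()
    except urllib3.exceptions.HTTPError as exc:
        print("request failed:", exc)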
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
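The proxy-header merge in ``urlopen`` above only happens when no CONNECT tunnel is required, i.e. for plain-HTTP destinations reached through an HTTP proxy. For illustration only, with a hypothetical proxy URL and header:

    import urllib3

    # Hypothetical proxy endpoint and credentials; for http:// destinations no
    # CONNECT tunnel is needed, so these proxy_headers are merged into each
    # request exactly as in the urlopen() code above.
    proxy = urllib3.ProxyManager(
        "http://proxy.example:3128/",
        proxy_headers={"Proxy-Authorization": "Basic dXNlcjpwYXNz"},
    )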
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
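The adapter's ``chunked`` flag above is derived purely from the request body and headers; restated as a single function:

    def should_chunk(body, headers):
        """Mirror of the adapter logic above: use chunked transfer encoding
        only when there is a body and no Content-Length header."""
        return not (body is None or "Content-Length" in headers)

    assert should_chunk(None, {}) is False
    assert should_chunk(b"payload", {}) is True
    assert should_chunk(b"payload", {"Content-Length": "7"}) is False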
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
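The ``%40`` in the failing path ``/a%40b/api/contents`` is simply the percent-encoding of ``@`` in the test server's base URL; a quick check:

    from urllib.parse import quote, unquote

    assert quote("/a@b/api/contents") == "/a%40b/api/contents"
    assert unquote("/a%40b/api/contents") == "/a@b/api/contents"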
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of AsyncSessionAPITest.test_modify_kernel_id __________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
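``setup_class`` at the start of this report guards the whole test class with ``SkipTest`` when jupyter_client is too old; the pattern in isolation (the flag name here is illustrative, the harness uses ``async_testing_enabled``):

    from unittest import SkipTest

    def skip_class_unless(flag, reason):
        """Skip an entire test class at setup time, as setup_class does above."""
        if not flag:
            raise SkipTest(reason)

    # e.g. skip_class_unless(async_testing_enabled, "down-level jupyter_client")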
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
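Before resolving the host, ``create_connection`` above validates it with the ``idna`` codec; the ``UnicodeError`` that urllib3 turns into ``LocationParseError`` can be triggered directly:

    try:
        "bad..host".encode("idna")  # empty label between the dots
    except UnicodeError as exc:
        print("rejected:", exc)     # urllib3 re-raises this as LocationParseError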
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
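The ``Timeout(connect=None, read=None, total=None)`` objects in the locals above split one setting into a connect phase and a read phase; for reference:

    from urllib3.util import Timeout

    t = Timeout(connect=3.05, read=27)
    # connect_timeout is what urlopen() copies onto the connection above
    # (conn.timeout = timeout_obj.connect_timeout); read_timeout applies
    # once the request has been written.
    print(t.connect_timeout, t.read_timeout)  # 3.05 27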
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of AsyncSessionAPITest.test_modify_kernel_name _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
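Editor's note: the root cause repeated through every one of these exception chains is a plain TCP-level refusal: nothing is listening on localhost:12341 when the tests poll it, so the notebook server under test never came up. A minimal sketch of that failure outside the test harness (the port number is copied from the log; any port with no listener behaves the same):

    import socket

    # Probe the same host/port the tests keep polling. With no server
    # listening, the OS refuses the TCP handshake with errno 111 -- the
    # ConnectionRefusedError that urllib3 wraps into NewConnectionError
    # in the traces above.
    try:
        with socket.create_connection(("localhost", 12341), timeout=1):
            pass
    except ConnectionRefusedError as exc:
        print("refused as expected:", exc)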
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of AsyncSessionAPITest.test_modify_path ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
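Editor's note: the RuntimeError that closes each of these ERROR blocks comes from the polling loop in notebook/tests/launchnotebook.py quoted above. A condensed, self-contained sketch of that loop follows; the MAX_WAITTIME and POLL_INTERVAL values are illustrative, not the package's actual constants:

    import time
    import requests

    MAX_WAITTIME = 30.0   # illustrative values, not the harness constants
    POLL_INTERVAL = 0.1

    def wait_until_alive(base_url, thread_is_alive):
        """Poll the contents API until the server answers or its thread dies."""
        url = base_url + "api/contents"
        for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
            try:
                requests.get(url)
                return
            except Exception as exc:
                # The harness gives up immediately once the server thread has
                # died -- this is what produces the RuntimeError in this log.
                if not thread_is_alive():
                    raise RuntimeError("The notebook server failed to start") from exc
                time.sleep(POLL_INTERVAL)
        raise RuntimeError("Timed out waiting for the notebook server to start")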
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______ ERROR at setup of AsyncSessionAPITest.test_modify_path_deprecated _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of AsyncSessionAPITest.test_modify_type ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s if not async_testing_enabled: # Can be removed once jupyter_client >= 6.1 is required. 
645s raise SkipTest("AsyncSessionAPITest tests skipped due to down-level jupyter_client!") 645s > super().setup_class() 645s 645s notebook/services/sessions/tests/test_sessions_api.py:274: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of TerminalAPITest.test_create_terminal ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 
645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 
645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 
'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ________ ERROR at setup of TerminalAPITest.test_create_terminal_via_get ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______ ERROR at setup of TerminalAPITest.test_create_terminal_with_name _______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
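The Retry.increment code quoted above is what turns the single refused connection into a MaxRetryError here: the harness issues the request with Retry(total=0), so the budget is exhausted on the first failure, and HTTPAdapter.send then re-raises the result as requests.exceptions.ConnectionError. A minimal sketch of that same chain, assuming only that nothing is listening on localhost port 12341 (the port used by this test run; any closed port behaves the same):

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    # Retry(total=0) roughly mirrors the retry object shown in the traceback:
    # the first refused connection exhausts the budget, urllib3 raises
    # MaxRetryError, and the adapter converts it to ConnectionError.
    session.mount("http://", HTTPAdapter(max_retries=Retry(total=0)))
    try:
        session.get("http://localhost:12341/a%40b/api/contents")
    except requests.exceptions.ConnectionError as exc:
        print("server not reachable:", exc)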
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of TerminalAPITest.test_no_terminals ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
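The root failure in every one of these tracebacks is the plain socket connect inside urllib3's create_connection helper (quoted earlier in this log) hitting "Connection refused" because no notebook server is listening on the target port. A minimal sketch of that lowest layer, again assuming port 12341 is closed:

    import errno
    import socket

    try:
        # socket.create_connection is the stdlib analogue of the urllib3
        # helper shown above: resolve the host, create a socket, connect.
        sock = socket.create_connection(("localhost", 12341), timeout=1)
        sock.close()
    except OSError as exc:
        # With nothing listening, connect() fails with ECONNREFUSED (errno 111),
        # which urllib3 wraps as NewConnectionError further up the stack.
        if exc.errno == errno.ECONNREFUSED:
            print("connection refused, server is not up")
        else:
            raise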
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of TerminalAPITest.test_terminal_handler ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
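The comment above describes how the test harness isolates itself from the local Jupyter installation by patching the process environment and the jupyter_core.paths lookup lists before the server is started. A minimal standalone sketch of that pattern, using only unittest.mock and a temporary directory (the HOME override is an illustrative assumption; the real harness builds its environment via get_patch_env()), could look like this:

    import os
    from tempfile import TemporaryDirectory
    from unittest.mock import patch

    import jupyter_core.paths

    tmp = TemporaryDirectory()

    def tmp_path(*parts):
        # scratch directory under the TemporaryDirectory, created on demand
        path = os.path.join(tmp.name, *parts)
        os.makedirs(path, exist_ok=True)
        return path

    # Point HOME (assumed here) and the system-wide Jupyter lookup paths at
    # scratch directories so tests cannot see the real user/system config.
    env_patch = patch.dict(os.environ, {"HOME": tmp_path("home")})
    path_patch = patch.multiple(
        jupyter_core.paths,
        SYSTEM_JUPYTER_PATH=[tmp_path("share", "jupyter")],
        SYSTEM_CONFIG_PATH=[tmp_path("etc", "jupyter")],
    )

    env_patch.start()
    path_patch.start()
    try:
        pass  # run the isolated tests here
    finally:
        path_patch.stop()
        env_patch.stop()
        tmp.cleanup()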
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of TerminalAPITest.test_terminal_root_handler _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
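The adapter.send excerpt above shows how requests translates urllib3's MaxRetryError (caused here by a NewConnectionError for the refused connection) into requests.exceptions.ConnectionError, which is what the test harness ultimately sees. A small sketch of what a caller observes, assuming nothing is listening on the chosen port, would be:

    import requests

    try:
        # port 12341 mirrors the traceback; any port with no listener behaves the same
        requests.get("http://localhost:12341/a%40b/api/contents")
    except requests.exceptions.ConnectionError as exc:
        # the wrapped urllib3.exceptions.MaxRetryError is kept as the first argument
        print(type(exc.args[0]).__name__)  # MaxRetryError
        print(exc)                         # mentions "[Errno 111] Connection refused"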
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of TerminalCullingTest.test_config _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
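The Retry(total=0, connect=None, read=False, ...) policy shown in the locals above exhausts on the very first connection error: increment() drops total to -1, is_exhausted() turns true, and MaxRetryError is raised with the NewConnectionError as its reason. A minimal sketch of that bookkeeping using urllib3's public Retry API, outside the test run (the NewConnectionError below is constructed by hand purely for illustration):

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError, NewConnectionError

# Same policy as in the traceback: no retries at all.
retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

# Hand-built stand-in for the error urllib3 raised when the socket connect failed.
reason = NewConnectionError(
    None, "Failed to establish a new connection: [Errno 111] Connection refused"
)

try:
    # One failed attempt is already one more than total=0 allows.
    retry.increment(method="GET", url="/a%40b/api/contents", error=reason)
except MaxRetryError as exc:
    print("gave up immediately:", exc.reason)

requests' HTTPAdapter then catches exactly this MaxRetryError and re-raises it as requests.exceptions.ConnectionError, which is the exception the test harness finally sees.
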
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
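TimeoutSauce in the adapter code above is requests' internal alias for urllib3.util.Timeout, so a (connect, read) tuple and a single float both end up as the kind of Timeout(connect=..., read=..., total=None) object echoed in the locals. A quick sketch of the two accepted forms:

from urllib3.util import Timeout

# What requests builds from timeout=(0.5, 3.0): separate connect and read limits.
tuple_timeout = Timeout(connect=0.5, read=3.0)
# What requests builds from timeout=2.0: the same value for both phases.
single_timeout = Timeout(connect=2.0, read=2.0)

print(tuple_timeout.connect_timeout, tuple_timeout.read_timeout)
print(single_timeout.connect_timeout, single_timeout.read_timeout)
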
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of TerminalCullingTest.test_culling ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
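wait_until_alive, quoted above, simply polls the contents API until the server answers and gives up with RuntimeError as soon as the server thread is no longer alive, which is why a crashed NotebookApp startup surfaces as "The notebook server failed to start" rather than a timeout. A stripped-down sketch of that polling pattern; MAX_WAITTIME and POLL_INTERVAL are assumed values here, the real constants live in launchnotebook.py and are not visible in this log:

import time
import requests

MAX_WAITTIME = 30       # assumed value, for illustration only
POLL_INTERVAL = 0.1     # assumed value, for illustration only

def wait_until_alive(url, server_thread):
    """Poll `url` until it responds, or fail fast if `server_thread` has died."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return
        except Exception as exc:
            if not server_thread.is_alive():
                raise RuntimeError("The notebook server failed to start") from exc
            time.sleep(POLL_INTERVAL)
    raise TimeoutError("Server never came up")
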
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
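Seen from the caller, the whole chain quoted above (ECONNREFUSED at the socket, NewConnectionError, MaxRetryError) collapses into a single requests.exceptions.ConnectionError, so reproducing the failure only needs a GET against the same unused port. A minimal repro, assuming nothing is listening on localhost:12341 on the machine running it:

import requests

try:
    requests.get("http://localhost:12341/a%40b/api/contents")
except requests.exceptions.ConnectionError as exc:
    # Same wrapping as in the traceback: MaxRetryError caused by NewConnectionError.
    print(type(exc).__name__, exc)
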
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of FilesTest.test_contents_manager _______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s         cls.path_patch = patch.multiple(
645s             jupyter_core.paths,
645s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
645s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
645s         )
645s         cls.path_patch.start()
645s 
645s         config = cls.config or Config()
645s         config.NotebookNotary.db_file = ':memory:'
645s 
645s         cls.token = hexlify(os.urandom(4)).decode('ascii')
645s 
645s         started = Event()
645s         def start_thread():
645s             try:
645s                 bind_args = cls.get_bind_args()
645s                 app = cls.notebook = NotebookApp(
645s                     port_retries=0,
645s                     open_browser=False,
645s                     config_dir=cls.config_dir,
645s                     data_dir=cls.data_dir,
645s                     runtime_dir=cls.runtime_dir,
645s                     notebook_dir=cls.notebook_dir,
645s                     base_url=cls.url_prefix,
645s                     config=config,
645s                     allow_root=True,
645s                     token=cls.token,
645s                     **bind_args
645s                 )
645s                 if "asyncio" in sys.modules:
645s                     app._init_asyncio_patch()
645s                 import asyncio
645s 
645s                 asyncio.set_event_loop(asyncio.new_event_loop())
645s                 # Patch the current loop in order to match production
645s                 # behavior
645s                 import nest_asyncio
645s 
645s                 nest_asyncio.apply()
645s                 # don't register signal handler during tests
645s                 app.init_signal = lambda : None
645s                 # clear log handlers and propagate to root for nose to capture it
645s                 # needs to be redone after initialize, which reconfigures logging
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 app.initialize(argv=cls.get_argv())
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 loop = IOLoop.current()
645s                 loop.add_callback(started.set)
645s                 app.start()
645s             finally:
645s                 # set the event, so failure to start doesn't cause a hang
645s                 started.set()
645s                 app.session_manager.close()
645s         cls.notebook_thread = Thread(target=start_thread)
645s         cls.notebook_thread.daemon = True
645s         cls.notebook_thread.start()
645s         started.wait()
645s >       cls.wait_until_alive()
645s 
645s notebook/tests/launchnotebook.py:198: 
645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
645s 
645s cls = 
645s 
645s     @classmethod
645s     def wait_until_alive(cls):
645s         """Wait for the server to be alive"""
645s         url = cls.base_url() + 'api/contents'
645s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
645s             try:
645s                 cls.fetch_url(url)
645s             except ModuleNotFoundError as error:
645s                 # Errors that should be immediately thrown back to caller
645s                 raise error
645s             except Exception as e:
645s                 if not cls.notebook_thread.is_alive():
645s >                   raise RuntimeError("The notebook server failed to start") from e
645s E                   RuntimeError: The notebook server failed to start
645s 
645s notebook/tests/launchnotebook.py:59: RuntimeError
645s __________________ ERROR at setup of FilesTest.test_download ___________________
645s 
645s self = 
645s 
645s     def _new_conn(self) -> socket.socket:
645s         """Establish a socket connection and set nodelay settings on it.
645s 
645s         :return: New socket connection.
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s         cls.path_patch = patch.multiple(
645s             jupyter_core.paths,
645s             SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')],
645s             SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')],
645s         )
645s         cls.path_patch.start()
645s 
645s         config = cls.config or Config()
645s         config.NotebookNotary.db_file = ':memory:'
645s 
645s         cls.token = hexlify(os.urandom(4)).decode('ascii')
645s 
645s         started = Event()
645s         def start_thread():
645s             try:
645s                 bind_args = cls.get_bind_args()
645s                 app = cls.notebook = NotebookApp(
645s                     port_retries=0,
645s                     open_browser=False,
645s                     config_dir=cls.config_dir,
645s                     data_dir=cls.data_dir,
645s                     runtime_dir=cls.runtime_dir,
645s                     notebook_dir=cls.notebook_dir,
645s                     base_url=cls.url_prefix,
645s                     config=config,
645s                     allow_root=True,
645s                     token=cls.token,
645s                     **bind_args
645s                 )
645s                 if "asyncio" in sys.modules:
645s                     app._init_asyncio_patch()
645s                 import asyncio
645s 
645s                 asyncio.set_event_loop(asyncio.new_event_loop())
645s                 # Patch the current loop in order to match production
645s                 # behavior
645s                 import nest_asyncio
645s 
645s                 nest_asyncio.apply()
645s                 # don't register signal handler during tests
645s                 app.init_signal = lambda : None
645s                 # clear log handlers and propagate to root for nose to capture it
645s                 # needs to be redone after initialize, which reconfigures logging
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 app.initialize(argv=cls.get_argv())
645s                 app.log.propagate = True
645s                 app.log.handlers = []
645s                 loop = IOLoop.current()
645s                 loop.add_callback(started.set)
645s                 app.start()
645s             finally:
645s                 # set the event, so failure to start doesn't cause a hang
645s                 started.set()
645s                 app.session_manager.close()
645s         cls.notebook_thread = Thread(target=start_thread)
645s         cls.notebook_thread.daemon = True
645s         cls.notebook_thread.start()
645s         started.wait()
645s >       cls.wait_until_alive()
645s 
645s notebook/tests/launchnotebook.py:198: 
645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
645s 
645s cls = 
645s 
645s     @classmethod
645s     def wait_until_alive(cls):
645s         """Wait for the server to be alive"""
645s         url = cls.base_url() + 'api/contents'
645s         for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)):
645s             try:
645s                 cls.fetch_url(url)
645s             except ModuleNotFoundError as error:
645s                 # Errors that should be immediately thrown back to caller
645s                 raise error
645s             except Exception as e:
645s                 if not cls.notebook_thread.is_alive():
645s >                   raise RuntimeError("The notebook server failed to start") from e
645s E                   RuntimeError: The notebook server failed to start
645s 
645s notebook/tests/launchnotebook.py:59: RuntimeError
645s ________________ ERROR at setup of FilesTest.test_hidden_files _________________
645s 
645s self = 
645s 
645s     def _new_conn(self) -> socket.socket:
645s         """Establish a socket connection and set nodelay settings on it.
645s 
645s         :return: New socket connection.
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____________ ERROR at setup of FilesTest.test_old_files_redirect ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________________ ERROR at setup of FilesTest.test_view_html __________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of TestGateway.test_gateway_class_mappings ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s GatewayClient.clear_instance() 645s > super().setup_class() 645s 645s notebook/tests/test_gateway.py:138: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of TestGateway.test_gateway_get_kernelspecs __________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s GatewayClient.clear_instance() 645s > super().setup_class() 645s 645s notebook/tests/test_gateway.py:138: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _______ ERROR at setup of TestGateway.test_gateway_get_named_kernelspec ________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
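Aside: the adapter code quoted just above is where requests normalises its timeout argument, unpacking a (connect, read) tuple into separate connect and read timeouts. A small usage sketch of that interface (the URL is a stand-in, not the test endpoint):

# Illustrative only: bound the connect and read phases separately in requests.
import requests

try:
    # 3.05 s to establish the TCP connection, 27 s to wait for the response.
    resp = requests.get("http://localhost:12341/api/contents", timeout=(3.05, 27))
    resp.raise_for_status()
except requests.exceptions.ConnectTimeout:
    print("could not connect within the connect timeout")
except requests.exceptions.ReadTimeout:
    print("server accepted the connection but was too slow to respond")
except requests.exceptions.ConnectionError as exc:
    print("connection failed outright:", exc)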
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
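Aside: the setup failures in this section all follow the same shape: wait_until_alive keeps polling api/contents while the notebook server thread never comes up, so every poll ends in the ConnectionError shown above and the harness finally raises RuntimeError. Its pattern is roughly the readiness loop below (a simplified sketch, not the notebook test code itself; MAX_WAITTIME, POLL_INTERVAL, the URL and the server_is_running callable are placeholders):

# Simplified sketch of a "wait until the server answers" readiness poll.
import time
import requests

MAX_WAITTIME = 30    # seconds to keep trying (placeholder value)
POLL_INTERVAL = 1    # seconds between attempts (placeholder value)
URL = "http://localhost:12341/a%40b/api/contents"

def wait_until_alive(server_is_running):
    """Poll URL until it responds, or give up once the server is known dead."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(URL)
            return  # any HTTP response at all means the server is alive
        except requests.exceptions.ConnectionError as e:
            if not server_is_running():
                # Mirrors the harness: stop early instead of polling a dead server.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("Timed out waiting for the server")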
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s GatewayClient.clear_instance() 645s > super().setup_class() 645s 645s notebook/tests/test_gateway.py:138: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of TestGateway.test_gateway_kernel_lifecycle __________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s GatewayClient.clear_instance() 645s > super().setup_class() 645s 645s notebook/tests/test_gateway.py:138: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of TestGateway.test_gateway_options ______________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
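On the requests side, the adapter docstring above describes the (connect, read) timeout tuple and the certificate options; when the pool cannot connect at all, the MaxRetryError seen in this traceback is re-raised as requests.exceptions.ConnectionError. A hedged sketch of that caller-side behaviour (the host, port and path mirror the log, the timeout values are illustrative):

import requests

try:
    r = requests.get(
        "http://localhost:12341/a%40b/api/contents",
        timeout=(2.0, 5.0),      # (connect timeout, read timeout)
    )
    r.raise_for_status()
except requests.exceptions.ConnectionError as exc:
    # Raised when urllib3 exhausts its retries with a NewConnectionError,
    # which is exactly what the surrounding traceback shows.
    print(f"server not reachable: {exc}")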
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s GatewayClient.clear_instance() 645s > super().setup_class() 645s 645s notebook/tests/test_gateway.py:138: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of TestGateway.test_gateway_session_lifecycle _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
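The [Errno 111] failures above come from the OS-level connect() refusal inside urllib3's create_connection() helper, which behaves much like the standard library's socket.create_connection(). A small sketch of reproducing the same condition directly (port 12341 matches the log; nothing needs to be listening there):

import socket

try:
    # Attempt a plain TCP connect to the port the notebook server should own.
    sock = socket.create_connection(("localhost", 12341), timeout=2.0)
except ConnectionRefusedError:
    # The same errno 111 condition that urllib3 wraps in NewConnectionError.
    print("nothing is listening on that port")
else:
    sock.close()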
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
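The retries documentation above also explains the failure mode recorded in these tracebacks: requests hands urllib3 a Retry(total=0, ...) object, so the very first connection error exhausts the budget and Retry.increment() raises MaxRetryError. A hedged sketch using the same values shown in the log (NewConnectionError is constructed with None where a real HTTPConnection would normally be passed):

from urllib3.util.retry import Retry
from urllib3.exceptions import MaxRetryError, NewConnectionError

retry = Retry(total=0, connect=None, read=False, redirect=None, status=None)

try:
    retry.increment(
        method="GET",
        url="/a%40b/api/contents",
        # Stand-in error object; a live HTTPConnection would normally be
        # its first argument.
        error=NewConnectionError(None, "Failed to establish a new connection"),
    )
except MaxRetryError as exc:
    # total drops below zero on the first increment, so the retry budget
    # is already exhausted.
    print(f"gave up immediately: {exc.reason}")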
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s GatewayClient.clear_instance() 645s > super().setup_class() 645s 645s notebook/tests/test_gateway.py:138: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _________ ERROR at setup of NotebookAppTests.test_list_running_servers _________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
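All of these setup errors funnel through the same readiness poll in notebook/tests/launchnotebook.py: wait_until_alive() keeps requesting api/contents until the server answers, and raises RuntimeError once it concludes the notebook server never started. A simplified, hedged sketch of that loop (MAX_WAITTIME and POLL_INTERVAL are illustrative values, not the package's actual settings, and the real helper also checks whether the server thread is still alive):

import time
import requests

MAX_WAITTIME = 30     # seconds to keep polling, illustrative
POLL_INTERVAL = 1     # seconds between attempts, illustrative
BASE_URL = "http://localhost:12341/a%40b/"   # port as shown in the log

def wait_until_alive() -> None:
    url = BASE_URL + "api/contents"
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return                        # the server answered, it is alive
        except requests.ConnectionError:
            time.sleep(POLL_INTERVAL)     # not up yet, try again
    raise RuntimeError("The notebook server failed to start")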
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
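The except-chain in this send() listing is where urllib3's MaxRetryError is translated into a requests-level exception; because the reason here is a NewConnectionError rather than a timeout, proxy, or SSL error, none of the earlier branches match and the final raise produces requests.exceptions.ConnectionError. From the caller's side the whole failure can be caught in one place, for example:

import requests

try:
    requests.get("http://localhost:12341/a%40b/api/contents")
except requests.exceptions.ConnectionError as err:
    # The message carries the MaxRetryError text seen in this log.
    print(err)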
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 
'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________ ERROR at setup of NotebookAppTests.test_log_json_default ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
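Stripped of the urllib3 and requests layers, the root cause recorded in this traceback is simply that nothing is listening on localhost:12341 when the client connects. The same errno can be reproduced with the standard library alone:

import socket

try:
    # Equivalent of the sock.connect(sa) call inside create_connection() above.
    socket.create_connection(("localhost", 12341), timeout=1)
except ConnectionRefusedError as e:
    print(e.errno)  # 111 (ECONNREFUSED) on Linux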
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
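A side note on the request path: '/a%40b/api/contents' is the percent-encoded form of a base URL containing an '@' (the harness passes base_url=cls.url_prefix, whose literal value is not shown in this log), and the _encode_target() call in urlopen() above is what keeps it encoded on the wire. The round trip is easy to check:

from urllib.parse import quote, unquote

print(unquote("/a%40b/api/contents"))        # /a@b/api/contents
print(quote("/a@b/api/contents", safe="/"))  # /a%40b/api/contents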
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
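The TimeoutSauce handling in the send() listing above is what lets callers pass either a single float or a (connect, read) pair; a small illustration of the calling side (still refused here, but the timeout plumbing is the point):

import requests

try:
    # A single float sets both timeouts; a (connect, read) tuple sets them separately,
    # which is what the adapter converts into a urllib3 Timeout.
    requests.get("http://localhost:12341/a%40b/api/contents", timeout=(3.05, 27))
except requests.exceptions.ConnectionError:
    pass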
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
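setup_class above builds a throwaway directory tree and patches the environment so the test server never touches the real user configuration. The pattern, reduced to its core (the exact variables set by get_patch_env() are not shown in this log; JUPYTER_CONFIG_DIR and JUPYTER_DATA_DIR are used here only as plausible examples):

import os
from tempfile import TemporaryDirectory
from unittest.mock import patch

tmp = TemporaryDirectory()
env = {
    "HOME": os.path.join(tmp.name, "home"),
    "JUPYTER_CONFIG_DIR": os.path.join(tmp.name, "config"),
    "JUPYTER_DATA_DIR": os.path.join(tmp.name, "data"),
}
os.makedirs(env["HOME"], exist_ok=True)
with patch.dict(os.environ, env):
    pass  # start the server under the isolated environment here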
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s __________ ERROR at setup of NotebookAppTests.test_validate_log_json ___________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
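[Editor's note] At the requests layer, the MaxRetryError above is translated by the except-branch quoted earlier into requests.exceptions.ConnectionError. A minimal sketch of the user-visible effect, assuming nothing is listening on the log's port 12341:

import requests

try:
    requests.get("http://localhost:12341/a%40b/api/contents", timeout=2)
except requests.exceptions.ConnectionError as e:
    # Wraps the urllib3 MaxRetryError / NewConnectionError chain seen above.
    print("ConnectionError:", e)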
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___ ERROR at setup of NotebookUnixSocketTests.test_list_running_sock_servers ___ 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 
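[Editor's note] The setup_class()/wait_until_alive() pair quoted above starts the server in a daemon thread and then polls an HTTP endpoint, raising RuntimeError as soon as the thread is found dead. A minimal, self-contained sketch of that polling loop; MAX_WAITTIME and POLL_INTERVAL are assumed values mirroring the test harness:

import time
import requests

MAX_WAITTIME = 30      # assumed: total seconds to keep polling
POLL_INTERVAL = 0.1    # assumed: seconds between attempts

def wait_until_alive(url, server_thread):
    """Poll `url` until it answers, or fail fast if the server thread died."""
    for _ in range(int(MAX_WAITTIME / POLL_INTERVAL)):
        try:
            requests.get(url)
            return
        except Exception as e:
            if not server_thread.is_alive():
                # Matches the RuntimeError reported in the log above.
                raise RuntimeError("The notebook server failed to start") from e
            time.sleep(POLL_INTERVAL)
    raise RuntimeError("The server never became reachable")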
645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 
645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 
645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def connect(self): 645s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 645s sock.settimeout(self.timeout) 645s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 645s > sock.connect(socket_path) 645s E FileNotFoundError: [Errno 2] No such file or directory 645s 645s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: FileNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None 645s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'}) 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
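[Editor's note] The unix-socket flavour of the failure is simpler than the TCP one: requests_unixsocket's connect() opens an AF_UNIX socket at a path the server never created, and the OS answers with [Errno 2]. A minimal sketch of that failure mode; the socket path below is a hypothetical placeholder:

import socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    sock.connect("/tmp/does-not-exist.sock")   # hypothetical, never-created path
except FileNotFoundError as e:
    # Surfaces through requests_unixsocket as ProtocolError, then as
    # requests.exceptions.ConnectionError, as in the traceback above.
    print("connect failed:", e)
finally:
    sock.close()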
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:470: in increment 645s raise reraise(type(error), error, _stacktrace) 645s /usr/lib/python3/dist-packages/urllib3/util/util.py:38: in reraise 645s raise value.with_traceback(tb) 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: in urlopen 645s response = self._make_request( 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def connect(self): 645s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 645s sock.settimeout(self.timeout) 645s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 645s > sock.connect(socket_path) 645s E urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 645s 645s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: ProtocolError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:242: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:51: in 
get 645s return request('get', url, **kwargs) 645s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:46: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None 645s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'}) 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s > raise ConnectionError(err, request=request) 645s E requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:501: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ______________ ERROR at setup of NotebookUnixSocketTests.test_run ______________ 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 
645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 
645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 
645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def connect(self): 645s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 645s sock.settimeout(self.timeout) 645s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 645s > sock.connect(socket_path) 645s E FileNotFoundError: [Errno 2] No such file or directory 645s 645s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: FileNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None 645s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'}) 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:470: in increment 645s raise reraise(type(error), error, _stacktrace) 645s /usr/lib/python3/dist-packages/urllib3/util/util.py:38: in reraise 645s raise value.with_traceback(tb) 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: in urlopen 645s response = self._make_request( 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def connect(self): 645s sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) 645s sock.settimeout(self.timeout) 645s socket_path = unquote(urlparse(self.unix_socket_url).netloc) 645s > sock.connect(socket_path) 645s E urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 645s 645s /usr/lib/python3/dist-packages/requests_unixsocket/adapters.py:36: ProtocolError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:242: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:51: in 
get 645s return request('get', url, **kwargs) 645s /usr/lib/python3/dist-packages/requests_unixsocket/__init__.py:46: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None 645s proxies = OrderedDict({'no': '127.0.0.1,127.0.1.1,login.ubuntu.com,localhost,localdomain,novalocal,internal,archive.ubuntu.com,p...,objectstorage.prodstack5.canonical.com', 'https': 'http://squid.internal:3128', 'http': 'http://squid.internal:3128'}) 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s > raise ConnectionError(err, request=request) 645s E requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:501: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of NotebookAppJSONLoggingTests.test_log_json_enabled ______ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 
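[Editor's note] The setup_class() body quoted above relies on a start-in-a-thread-and-signal pattern: the server runs in a daemon thread, and an Event is set both when the loop is about to start and in the finally block, so a crash during startup cannot hang the caller; detecting the crash is left to wait_until_alive(). A minimal sketch of that pattern, with a simulated startup failure standing in for the real server:

from threading import Event, Thread

def start_server(started):
    try:
        # ... configure the app; app.start() would normally block here ...
        raise RuntimeError("simulated startup failure")  # stand-in for a crash
    except RuntimeError:
        pass  # in the real harness the exception simply ends the thread
    finally:
        started.set()  # set even on failure, so the caller never waits forever

started = Event()
t = Thread(target=start_server, args=(started,), daemon=True)
t.start()
started.wait()
t.join(timeout=1)
print("server thread alive:", t.is_alive())  # False -> the poller raises RuntimeError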
645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
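[Editor's note] Underneath all of the TCP-based failures above is a plain refused connect: urllib3's create_connection() gets ECONNREFUSED from the OS and wraps it in NewConnectionError. A minimal sketch at the socket level, assuming the log's port 12341 is still unused locally:

import socket

try:
    socket.create_connection(("localhost", 12341), timeout=1)
except ConnectionRefusedError as e:
    print("refused:", e)            # [Errno 111] Connection refused on Linux
except OSError as e:
    print("other socket error:", e)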
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s > super().setup_class() 645s 645s notebook/tests/test_notebookapp.py:212: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s _____ ERROR at setup of NotebookAppJSONLoggingTests.test_validate_log_json _____ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s > super().setup_class() 645s 645s notebook/tests/test_notebookapp.py:212: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:198: in setup_class 645s cls.wait_until_alive() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ____________ ERROR at setup of RedirectTestCase.test_trailing_slash ____________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 
645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 
645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 
645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 
645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. (connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 
645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 
645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 
645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 
645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 
'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s ___________________ ERROR at setup of TreeTest.test_redirect ___________________ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s > sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:203: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:85: in create_connection 645s raise err 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s address = ('localhost', 12341), timeout = None, source_address = None 645s socket_options = [(6, 1, 1)] 645s 645s def create_connection( 645s address: tuple[str, int], 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s source_address: tuple[str, int] | None = None, 645s socket_options: _TYPE_SOCKET_OPTIONS | None = None, 645s ) -> socket.socket: 645s """Connect to *address* and return the socket object. 645s 645s Convenience function. Connect to *address* (a 2-tuple ``(host, 645s port)``) and return the socket object. Passing the optional 645s *timeout* parameter will set the timeout on the socket instance 645s before attempting to connect. If no *timeout* is supplied, the 645s global default timeout setting returned by :func:`socket.getdefaulttimeout` 645s is used. If *source_address* is set it must be a tuple of (host, port) 645s for the socket to bind as a source address before making the connection. 645s An host of '' or port 0 tells the OS to use the default. 645s """ 645s 645s host, port = address 645s if host.startswith("["): 645s host = host.strip("[]") 645s err = None 645s 645s # Using the value from allowed_gai_family() in the context of getaddrinfo lets 645s # us select whether to work with IPv4 DNS records, IPv6 records, or both. 645s # The original create_connection function always returns all records. 645s family = allowed_gai_family() 645s 645s try: 645s host.encode("idna") 645s except UnicodeError: 645s raise LocationParseError(f"'{host}', label empty or too long") from None 645s 645s for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 645s af, socktype, proto, canonname, sa = res 645s sock = None 645s try: 645s sock = socket.socket(af, socktype, proto) 645s 645s # If provided, set socket level options before connecting. 
645s _set_socket_options(sock, socket_options) 645s 645s if timeout is not _DEFAULT_TIMEOUT: 645s sock.settimeout(timeout) 645s if source_address: 645s sock.bind(source_address) 645s > sock.connect(sa) 645s E ConnectionRefusedError: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/util/connection.py:73: ConnectionRefusedError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s method = 'GET', url = '/a%40b/api/contents', body = None 645s headers = {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} 645s retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s redirect = False, assert_same_host = False 645s timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None 645s release_conn = False, chunked = False, body_pos = None, preload_content = False 645s decode_content = False, response_kw = {} 645s parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/a%40b/api/contents', query=None, fragment=None) 645s destination_scheme = None, conn = None, release_this_conn = True 645s http_tunnel_required = False, err = None, clean_exit = False 645s 645s def urlopen( # type: ignore[override] 645s self, 645s method: str, 645s url: str, 645s body: _TYPE_BODY | None = None, 645s headers: typing.Mapping[str, str] | None = None, 645s retries: Retry | bool | int | None = None, 645s redirect: bool = True, 645s assert_same_host: bool = True, 645s timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, 645s pool_timeout: int | None = None, 645s release_conn: bool | None = None, 645s chunked: bool = False, 645s body_pos: _TYPE_BODY_POSITION | None = None, 645s preload_content: bool = True, 645s decode_content: bool = True, 645s **response_kw: typing.Any, 645s ) -> BaseHTTPResponse: 645s """ 645s Get a connection from the pool and perform an HTTP request. This is the 645s lowest level call for making a request, so you'll need to specify all 645s the raw details. 645s 645s .. note:: 645s 645s More commonly, it's appropriate to use a convenience method 645s such as :meth:`request`. 645s 645s .. note:: 645s 645s `release_conn` will only behave as expected if 645s `preload_content=False` because we want to make 645s `preload_content=False` the default behaviour someday soon without 645s breaking backwards compatibility. 645s 645s :param method: 645s HTTP request method (such as GET, POST, PUT, etc.) 645s 645s :param url: 645s The URL to perform the request on. 645s 645s :param body: 645s Data to send in the request body, either :class:`str`, :class:`bytes`, 645s an iterable of :class:`str`/:class:`bytes`, or a file-like object. 645s 645s :param headers: 645s Dictionary of custom headers to send, such as User-Agent, 645s If-None-Match, etc. If None, pool headers are used. If provided, 645s these headers completely replace any pool-specific headers. 645s 645s :param retries: 645s Configure the number of retries to allow before raising a 645s :class:`~urllib3.exceptions.MaxRetryError` exception. 645s 645s Pass ``None`` to retry until you receive a response. Pass a 645s :class:`~urllib3.util.retry.Retry` object for fine-grained control 645s over different types of retries. 645s Pass an integer number to retry connection errors that many times, 645s but no other types of errors. Pass zero to never retry. 645s 645s If ``False``, then retries are disabled and any exception is raised 645s immediately. 
Also, instead of raising a MaxRetryError on redirects, 645s the redirect response will be returned. 645s 645s :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. 645s 645s :param redirect: 645s If True, automatically handle redirects (status codes 301, 302, 645s 303, 307, 308). Each redirect counts as a retry. Disabling retries 645s will disable redirect, too. 645s 645s :param assert_same_host: 645s If ``True``, will make sure that the host of the pool requests is 645s consistent else will raise HostChangedError. When ``False``, you can 645s use the pool on an HTTP proxy and request foreign hosts. 645s 645s :param timeout: 645s If specified, overrides the default timeout for this one 645s request. It may be a float (in seconds) or an instance of 645s :class:`urllib3.util.Timeout`. 645s 645s :param pool_timeout: 645s If set and the pool is set to block=True, then this method will 645s block for ``pool_timeout`` seconds and raise EmptyPoolError if no 645s connection is available within the time period. 645s 645s :param bool preload_content: 645s If True, the response's body will be preloaded into memory. 645s 645s :param bool decode_content: 645s If True, will attempt to decode the body based on the 645s 'content-encoding' header. 645s 645s :param release_conn: 645s If False, then the urlopen call will not release the connection 645s back into the pool once a response is received (but will release if 645s you read the entire contents of the response such as when 645s `preload_content=True`). This is useful if you're not preloading 645s the response's content immediately. You will need to call 645s ``r.release_conn()`` on the response ``r`` to return the connection 645s back into the pool. If None, it takes the value of ``preload_content`` 645s which defaults to ``True``. 645s 645s :param bool chunked: 645s If True, urllib3 will send the body using chunked transfer 645s encoding. Otherwise, urllib3 will send the body using the standard 645s content-length form. Defaults to False. 645s 645s :param int body_pos: 645s Position to seek to in file-like body in the event of a retry or 645s redirect. Typically this won't need to be set because urllib3 will 645s auto-populate the value when needed. 645s """ 645s parsed_url = parse_url(url) 645s destination_scheme = parsed_url.scheme 645s 645s if headers is None: 645s headers = self.headers 645s 645s if not isinstance(retries, Retry): 645s retries = Retry.from_int(retries, redirect=redirect, default=self.retries) 645s 645s if release_conn is None: 645s release_conn = preload_content 645s 645s # Check host 645s if assert_same_host and not self.is_same_host(url): 645s raise HostChangedError(self, url, retries) 645s 645s # Ensure that the URL we're connecting to is properly encoded 645s if url.startswith("/"): 645s url = to_str(_encode_target(url)) 645s else: 645s url = to_str(parsed_url.url) 645s 645s conn = None 645s 645s # Track whether `conn` needs to be released before 645s # returning/raising/recursing. Update this variable if necessary, and 645s # leave `release_conn` constant throughout the function. That way, if 645s # the function recurses, the original value of `release_conn` will be 645s # passed down into the recursive call, and its value will be respected. 645s # 645s # See issue #651 [1] for details. 645s # 645s # [1] 645s release_this_conn = release_conn 645s 645s http_tunnel_required = connection_requires_http_tunnel( 645s self.proxy, self.proxy_config, destination_scheme 645s ) 645s 645s # Merge the proxy headers. 
Only done when not using HTTP CONNECT. We 645s # have to copy the headers dict so we can safely change it without those 645s # changes being reflected in anyone else's copy. 645s if not http_tunnel_required: 645s headers = headers.copy() # type: ignore[attr-defined] 645s headers.update(self.proxy_headers) # type: ignore[union-attr] 645s 645s # Must keep the exception bound to a separate variable or else Python 3 645s # complains about UnboundLocalError. 645s err = None 645s 645s # Keep track of whether we cleanly exited the except block. This 645s # ensures we do proper cleanup in finally. 645s clean_exit = False 645s 645s # Rewind body position, if needed. Record current position 645s # for future rewinds in the event of a redirect/retry. 645s body_pos = set_file_position(body, body_pos) 645s 645s try: 645s # Request a connection from the queue. 645s timeout_obj = self._get_timeout(timeout) 645s conn = self._get_conn(timeout=pool_timeout) 645s 645s conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] 645s 645s # Is this a closed/new connection that requires CONNECT tunnelling? 645s if self.proxy is not None and http_tunnel_required and conn.is_closed: 645s try: 645s self._prepare_proxy(conn) 645s except (BaseSSLError, OSError, SocketTimeout) as e: 645s self._raise_timeout( 645s err=e, url=self.proxy.url, timeout_value=conn.timeout 645s ) 645s raise 645s 645s # If we're going to release the connection in ``finally:``, then 645s # the response doesn't need to know about the connection. Otherwise 645s # it will also try to release it and we'll have a double-release 645s # mess. 645s response_conn = conn if not release_conn else None 645s 645s # Make the request on the HTTPConnection object 645s > response = self._make_request( 645s conn, 645s method, 645s url, 645s timeout=timeout_obj, 645s body=body, 645s headers=headers, 645s chunked=chunked, 645s retries=retries, 645s response_conn=response_conn, 645s preload_content=preload_content, 645s decode_content=decode_content, 645s **response_kw, 645s ) 645s 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:791: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:497: in _make_request 645s conn.request( 645s /usr/lib/python3/dist-packages/urllib3/connection.py:395: in request 645s self.endheaders() 645s /usr/lib/python3.12/http/client.py:1331: in endheaders 645s self._send_output(message_body, encode_chunked=encode_chunked) 645s /usr/lib/python3.12/http/client.py:1091: in _send_output 645s self.send(msg) 645s /usr/lib/python3.12/http/client.py:1035: in send 645s self.connect() 645s /usr/lib/python3/dist-packages/urllib3/connection.py:243: in connect 645s self.sock = self._new_conn() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _new_conn(self) -> socket.socket: 645s """Establish a socket connection and set nodelay settings on it. 645s 645s :return: New socket connection. 645s """ 645s try: 645s sock = connection.create_connection( 645s (self._dns_host, self.port), 645s self.timeout, 645s source_address=self.source_address, 645s socket_options=self.socket_options, 645s ) 645s except socket.gaierror as e: 645s raise NameResolutionError(self.host, self, e) from e 645s except SocketTimeout as e: 645s raise ConnectTimeoutError( 645s self, 645s f"Connection to {self.host} timed out. 
(connect timeout={self.timeout})", 645s ) from e 645s 645s except OSError as e: 645s > raise NewConnectionError( 645s self, f"Failed to establish a new connection: {e}" 645s ) from e 645s E urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused 645s 645s /usr/lib/python3/dist-packages/urllib3/connection.py:218: NewConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 
645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s > resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:486: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/urllib3/connectionpool.py:845: in urlopen 645s retries = retries.increment( 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = Retry(total=0, connect=None, read=False, redirect=None, status=None) 645s method = 'GET', url = '/a%40b/api/contents', response = None 645s error = NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused') 645s _pool = 645s _stacktrace = 645s 645s def increment( 645s self, 645s method: str | None = None, 645s url: str | None = None, 645s response: BaseHTTPResponse | None = None, 645s error: Exception | None = None, 645s _pool: ConnectionPool | None = None, 645s _stacktrace: TracebackType | None = None, 645s ) -> Retry: 645s """Return a new Retry object with incremented retry counters. 645s 645s :param response: A response object, or None, if the server did not 645s return a response. 645s :type response: :class:`~urllib3.response.BaseHTTPResponse` 645s :param Exception error: An error encountered during the request, or 645s None if the response was received successfully. 645s 645s :return: A new ``Retry`` object. 645s """ 645s if self.total is False and error: 645s # Disabled, indicate to re-raise the error. 645s raise reraise(type(error), error, _stacktrace) 645s 645s total = self.total 645s if total is not None: 645s total -= 1 645s 645s connect = self.connect 645s read = self.read 645s redirect = self.redirect 645s status_count = self.status 645s other = self.other 645s cause = "unknown" 645s status = None 645s redirect_location = None 645s 645s if error and self._is_connection_error(error): 645s # Connect retry? 645s if connect is False: 645s raise reraise(type(error), error, _stacktrace) 645s elif connect is not None: 645s connect -= 1 645s 645s elif error and self._is_read_error(error): 645s # Read retry? 645s if read is False or method is None or not self._is_method_retryable(method): 645s raise reraise(type(error), error, _stacktrace) 645s elif read is not None: 645s read -= 1 645s 645s elif error: 645s # Other retry? 645s if other is not None: 645s other -= 1 645s 645s elif response and response.get_redirect_location(): 645s # Redirect retry? 
645s if redirect is not None: 645s redirect -= 1 645s cause = "too many redirects" 645s response_redirect_location = response.get_redirect_location() 645s if response_redirect_location: 645s redirect_location = response_redirect_location 645s status = response.status 645s 645s else: 645s # Incrementing because of a server error like a 500 in 645s # status_forcelist and the given method is in the allowed_methods 645s cause = ResponseError.GENERIC_ERROR 645s if response and response.status: 645s if status_count is not None: 645s status_count -= 1 645s cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) 645s status = response.status 645s 645s history = self.history + ( 645s RequestHistory(method, url, error, status, redirect_location), 645s ) 645s 645s new_retry = self.new( 645s total=total, 645s connect=connect, 645s read=read, 645s redirect=redirect, 645s status=status_count, 645s other=other, 645s history=history, 645s ) 645s 645s if new_retry.is_exhausted(): 645s reason = error or ResponseError(cause) 645s > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] 645s E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/urllib3/util/retry.py:515: MaxRetryError 645s 645s During handling of the above exception, another exception occurred: 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s > cls.fetch_url(url) 645s 645s notebook/tests/launchnotebook.py:53: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s notebook/tests/launchnotebook.py:82: in fetch_url 645s return requests.get(url) 645s /usr/lib/python3/dist-packages/requests/api.py:73: in get 645s return request("get", url, params=params, **kwargs) 645s /usr/lib/python3/dist-packages/requests/api.py:59: in request 645s return session.request(method=method, url=url, **kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:589: in request 645s resp = self.send(prep, **send_kwargs) 645s /usr/lib/python3/dist-packages/requests/sessions.py:703: in send 645s r = adapter.send(request, **kwargs) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s request = , stream = False 645s timeout = Timeout(connect=None, read=None, total=None), verify = True 645s cert = None, proxies = OrderedDict() 645s 645s def send( 645s self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None 645s ): 645s """Sends PreparedRequest object. Returns Response object. 645s 645s :param request: The :class:`PreparedRequest ` being sent. 645s :param stream: (optional) Whether to stream the request content. 645s :param timeout: (optional) How long to wait for the server to send 645s data before giving up, as a float, or a :ref:`(connect timeout, 645s read timeout) ` tuple. 645s :type timeout: float or tuple or urllib3 Timeout object 645s :param verify: (optional) Either a boolean, in which case it controls whether 645s we verify the server's TLS certificate, or a string, in which case it 645s must be a path to a CA bundle to use 645s :param cert: (optional) Any user-provided SSL certificate to be trusted. 
645s :param proxies: (optional) The proxies dictionary to apply to the request. 645s :rtype: requests.Response 645s """ 645s 645s try: 645s conn = self.get_connection(request.url, proxies) 645s except LocationValueError as e: 645s raise InvalidURL(e, request=request) 645s 645s self.cert_verify(conn, request.url, verify, cert) 645s url = self.request_url(request, proxies) 645s self.add_headers( 645s request, 645s stream=stream, 645s timeout=timeout, 645s verify=verify, 645s cert=cert, 645s proxies=proxies, 645s ) 645s 645s chunked = not (request.body is None or "Content-Length" in request.headers) 645s 645s if isinstance(timeout, tuple): 645s try: 645s connect, read = timeout 645s timeout = TimeoutSauce(connect=connect, read=read) 645s except ValueError: 645s raise ValueError( 645s f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " 645s f"or a single float to set both timeouts to the same value." 645s ) 645s elif isinstance(timeout, TimeoutSauce): 645s pass 645s else: 645s timeout = TimeoutSauce(connect=timeout, read=timeout) 645s 645s try: 645s resp = conn.urlopen( 645s method=request.method, 645s url=url, 645s body=request.body, 645s headers=request.headers, 645s redirect=False, 645s assert_same_host=False, 645s preload_content=False, 645s decode_content=False, 645s retries=self.max_retries, 645s timeout=timeout, 645s chunked=chunked, 645s ) 645s 645s except (ProtocolError, OSError) as err: 645s raise ConnectionError(err, request=request) 645s 645s except MaxRetryError as e: 645s if isinstance(e.reason, ConnectTimeoutError): 645s # TODO: Remove this in 3.0.0: see #2811 645s if not isinstance(e.reason, NewConnectionError): 645s raise ConnectTimeout(e, request=request) 645s 645s if isinstance(e.reason, ResponseError): 645s raise RetryError(e, request=request) 645s 645s if isinstance(e.reason, _ProxyError): 645s raise ProxyError(e, request=request) 645s 645s if isinstance(e.reason, _SSLError): 645s # This branch is for urllib3 v1.22 and later. 645s raise SSLError(e, request=request) 645s 645s > raise ConnectionError(e, request=request) 645s E requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=12341): Max retries exceeded with url: /a%40b/api/contents (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')) 645s 645s /usr/lib/python3/dist-packages/requests/adapters.py:519: ConnectionError 645s 645s The above exception was the direct cause of the following exception: 645s 645s cls = 645s 645s @classmethod 645s def setup_class(cls): 645s cls.tmp_dir = TemporaryDirectory() 645s def tmp(*parts): 645s path = os.path.join(cls.tmp_dir.name, *parts) 645s try: 645s os.makedirs(path) 645s except OSError as e: 645s if e.errno != errno.EEXIST: 645s raise 645s return path 645s 645s cls.home_dir = tmp('home') 645s data_dir = cls.data_dir = tmp('data') 645s config_dir = cls.config_dir = tmp('config') 645s runtime_dir = cls.runtime_dir = tmp('runtime') 645s cls.notebook_dir = tmp('notebooks') 645s cls.env_patch = patch.dict('os.environ', cls.get_patch_env()) 645s cls.env_patch.start() 645s # Patch systemwide & user-wide data & config directories, to isolate 645s # the tests from oddities of the local setup. But leave Python env 645s # locations alone, so data files for e.g. nbconvert are accessible. 645s # If this isolation isn't sufficient, you may need to run the tests in 645s # a virtualenv or conda env. 
645s cls.path_patch = patch.multiple( 645s jupyter_core.paths, 645s SYSTEM_JUPYTER_PATH=[tmp('share', 'jupyter')], 645s SYSTEM_CONFIG_PATH=[tmp('etc', 'jupyter')], 645s ) 645s cls.path_patch.start() 645s 645s config = cls.config or Config() 645s config.NotebookNotary.db_file = ':memory:' 645s 645s cls.token = hexlify(os.urandom(4)).decode('ascii') 645s 645s started = Event() 645s def start_thread(): 645s try: 645s bind_args = cls.get_bind_args() 645s app = cls.notebook = NotebookApp( 645s port_retries=0, 645s open_browser=False, 645s config_dir=cls.config_dir, 645s data_dir=cls.data_dir, 645s runtime_dir=cls.runtime_dir, 645s notebook_dir=cls.notebook_dir, 645s base_url=cls.url_prefix, 645s config=config, 645s allow_root=True, 645s token=cls.token, 645s **bind_args 645s ) 645s if "asyncio" in sys.modules: 645s app._init_asyncio_patch() 645s import asyncio 645s 645s asyncio.set_event_loop(asyncio.new_event_loop()) 645s # Patch the current loop in order to match production 645s # behavior 645s import nest_asyncio 645s 645s nest_asyncio.apply() 645s # don't register signal handler during tests 645s app.init_signal = lambda : None 645s # clear log handlers and propagate to root for nose to capture it 645s # needs to be redone after initialize, which reconfigures logging 645s app.log.propagate = True 645s app.log.handlers = [] 645s app.initialize(argv=cls.get_argv()) 645s app.log.propagate = True 645s app.log.handlers = [] 645s loop = IOLoop.current() 645s loop.add_callback(started.set) 645s app.start() 645s finally: 645s # set the event, so failure to start doesn't cause a hang 645s started.set() 645s app.session_manager.close() 645s cls.notebook_thread = Thread(target=start_thread) 645s cls.notebook_thread.daemon = True 645s cls.notebook_thread.start() 645s started.wait() 645s > cls.wait_until_alive() 645s 645s notebook/tests/launchnotebook.py:198: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s cls = 645s 645s @classmethod 645s def wait_until_alive(cls): 645s """Wait for the server to be alive""" 645s url = cls.base_url() + 'api/contents' 645s for _ in range(int(MAX_WAITTIME/POLL_INTERVAL)): 645s try: 645s cls.fetch_url(url) 645s except ModuleNotFoundError as error: 645s # Errors that should be immediately thrown back to caller 645s raise error 645s except Exception as e: 645s if not cls.notebook_thread.is_alive(): 645s > raise RuntimeError("The notebook server failed to start") from e 645s E RuntimeError: The notebook server failed to start 645s 645s notebook/tests/launchnotebook.py:59: RuntimeError 645s =================================== FAILURES =================================== 645s __________________ TestSessionManager.test_bad_delete_session __________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 
645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s ___________________ TestSessionManager.test_bad_get_session ____________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." 
% type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s __________________ TestSessionManager.test_bad_update_session __________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s ____________________ TestSessionManager.test_delete_session ____________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s _____________________ TestSessionManager.test_get_session ______________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s _______________ TestSessionManager.test_get_session_dead_kernel ________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s ____________________ TestSessionManager.test_list_sessions _____________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s ______________ TestSessionManager.test_list_sessions_dead_kernel _______________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s ____________________ TestSessionManager.test_update_session ____________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:336: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.services.contents.manager.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def setUp(self): 645s > self.sm = SessionManager( 645s kernel_manager=DummyMKM(), 645s contents_manager=ContentsManager(), 645s ) 645s 645s notebook/services/sessions/tests/test_sessionmanager.py:45: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:327: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:339: TypeError 645s _______________________________ test_help_output _______________________________ 645s 645s def test_help_output(): 645s """ipython notebook --help-all works""" 645s > check_help_all_output('notebook') 645s 645s notebook/tests/test_notebookapp.py:28: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s pkg = 'notebook', subcommand = None 645s 645s def check_help_all_output(pkg: str, subcommand: Sequence[str] | None = None) -> tuple[str, str]: 645s """test that `python -m PKG --help-all` works""" 645s cmd = [sys.executable, "-m", pkg] 645s if subcommand: 645s cmd.extend(subcommand) 645s cmd.append("--help-all") 645s out, err, rc = get_output_error_code(cmd) 645s > assert rc == 0, err 645s E AssertionError: Traceback (most recent call last): 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s E klass = self._resolve_string(klass) 645s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s E return import_item(string) 645s E ^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s E module = __import__(package, fromlist=[obj]) 645s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s E 645s E During handling of the above exception, another exception occurred: 645s E 645s E Traceback (most recent call last): 645s E File "", line 198, in _run_module_as_main 645s E File "", line 88, in _run_code 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/__main__.py", line 3, in 645s E app.launch_new_instance() 645s E File "/usr/lib/python3/dist-packages/jupyter_core/application.py", line 282, in launch_instance 645s E super().launch_instance(argv=argv, **kwargs) 
645s E File "/usr/lib/python3/dist-packages/traitlets/config/application.py", line 1073, in launch_instance 645s E app = cls.instance(**kwargs) 645s E ^^^^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/config/configurable.py", line 583, in instance 645s E inst = cls(*args, **kwargs) 645s E ^^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s E inst.setup_instance(*args, **kwargs) 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s E super(HasTraits, self).setup_instance(*args, **kwargs) 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s E init(self) 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s E self._resolve_classes() 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s E warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s /usr/lib/python3/dist-packages/traitlets/tests/utils.py:38: AssertionError 645s ____________________________ test_server_info_file _____________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_server_info_file(): 645s td = TemporaryDirectory() 645s > nbapp = NotebookApp(runtime_dir=td.name, log=logging.getLogger()) 645s 645s notebook/tests/test_notebookapp.py:32: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _________________________________ test_nb_dir __________________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_nb_dir(): 645s with TemporaryDirectory() as td: 645s > app = NotebookApp(notebook_dir=td) 645s 645s notebook/tests/test_notebookapp.py:49: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s ____________________________ test_no_create_nb_dir _____________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_no_create_nb_dir(): 645s with TemporaryDirectory() as td: 645s nbdir = os.path.join(td, 'notebooks') 645s > app = NotebookApp() 645s 645s notebook/tests/test_notebookapp.py:55: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _____________________________ test_missing_nb_dir ______________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_missing_nb_dir(): 645s with TemporaryDirectory() as td: 645s nbdir = os.path.join(td, 'notebook', 'dir', 'is', 'missing') 645s > app = NotebookApp() 645s 645s notebook/tests/test_notebookapp.py:62: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _____________________________ test_invalid_nb_dir ______________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_invalid_nb_dir(): 645s with NamedTemporaryFile() as tf: 645s > app = NotebookApp() 645s 645s notebook/tests/test_notebookapp.py:68: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s ____________________________ test_nb_dir_with_slash ____________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_nb_dir_with_slash(): 645s with TemporaryDirectory(suffix="_slash" + os.sep) as td: 645s > app = NotebookApp(notebook_dir=td) 645s 645s notebook/tests/test_notebookapp.py:74: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _______________________________ test_nb_dir_root _______________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_nb_dir_root(): 645s root = os.path.abspath(os.sep) # gets the right value on Windows, Posix 645s > app = NotebookApp(notebook_dir=root) 645s 645s notebook/tests/test_notebookapp.py:79: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _____________________________ test_generate_config _____________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_generate_config(): 645s with TemporaryDirectory() as td: 645s > app = NotebookApp(config_dir=td) 645s 645s notebook/tests/test_notebookapp.py:84: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s ____________________________ test_notebook_password ____________________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s def test_notebook_password(): 645s password = 'secret' 645s with TemporaryDirectory() as td: 645s with patch.dict('os.environ', { 645s 'JUPYTER_CONFIG_DIR': td, 645s }), patch.object(getpass, 'getpass', return_value=password): 645s app = notebookapp.NotebookPasswordApp(log_level=logging.ERROR) 645s app.initialize([]) 645s app.start() 645s > nb = NotebookApp() 645s 645s notebook/tests/test_notebookapp.py:133: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _________________ TestInstallServerExtension.test_merge_config _________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 
645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def test_merge_config(self): 645s # enabled at sys level 645s mock_sys = self._inject_mock_extension('mockext_sys') 645s # enabled at sys, disabled at user 645s mock_both = self._inject_mock_extension('mockext_both') 645s # enabled at user 645s mock_user = self._inject_mock_extension('mockext_user') 645s # enabled at Python 645s mock_py = self._inject_mock_extension('mockext_py') 645s 645s toggle_serverextension_python('mockext_sys', enabled=True, user=False) 645s toggle_serverextension_python('mockext_user', enabled=True, user=True) 645s toggle_serverextension_python('mockext_both', enabled=True, user=False) 645s toggle_serverextension_python('mockext_both', enabled=False, user=True) 645s 645s > app = NotebookApp(nbserver_extensions={'mockext_py': True}) 645s 645s notebook/tests/test_serverextensions.py:147: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _________________ TestOrderedServerExtension.test_load_ordered _________________ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s > klass = self._resolve_string(klass) 645s 645s notebook/traittypes.py:235: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:2015: in _resolve_string 645s return import_item(string) 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s name = 'jupyter_server.contents.services.managers.ContentsManager' 645s 645s def import_item(name: str) -> Any: 645s """Import and return ``bar`` given the string ``foo.bar``. 645s 645s Calling ``bar = import_item("foo.bar")`` is the functional equivalent of 645s executing the code ``from foo import bar``. 645s 645s Parameters 645s ---------- 645s name : string 645s The fully qualified name of the module/package being imported. 645s 645s Returns 645s ------- 645s mod : module object 645s The module that was imported. 
645s """ 645s if not isinstance(name, str): 645s raise TypeError("import_item accepts strings, not '%s'." % type(name)) 645s parts = name.rsplit(".", 1) 645s if len(parts) == 2: 645s # called with 'foo.bar....' 645s package, obj = parts 645s > module = __import__(package, fromlist=[obj]) 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s 645s /usr/lib/python3/dist-packages/traitlets/utils/importstring.py:33: ModuleNotFoundError 645s 645s During handling of the above exception, another exception occurred: 645s 645s self = 645s 645s def test_load_ordered(self): 645s > app = NotebookApp() 645s 645s notebook/tests/test_serverextensions.py:189: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1292: in __new__ 645s inst.setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1335: in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s /usr/lib/python3/dist-packages/traitlets/traitlets.py:1311: in setup_instance 645s init(self) 645s notebook/traittypes.py:226: in instance_init 645s self._resolve_classes() 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s self = 645s 645s def _resolve_classes(self): 645s # Resolve all string names to actual classes. 645s self.importable_klasses = [] 645s for klass in self.klasses: 645s if isinstance(klass, str): 645s try: 645s klass = self._resolve_string(klass) 645s self.importable_klasses.append(klass) 645s except: 645s > warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s notebook/traittypes.py:238: TypeError 645s _______________________________ test_help_output _______________________________ 645s 645s def test_help_output(): 645s """jupyter notebook --help-all works""" 645s # FIXME: will be notebook 645s > check_help_all_output('notebook') 645s 645s notebook/tests/test_utils.py:21: 645s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 645s 645s pkg = 'notebook', subcommand = None 645s 645s def check_help_all_output(pkg: str, subcommand: Sequence[str] | None = None) -> tuple[str, str]: 645s """test that `python -m PKG --help-all` works""" 645s cmd = [sys.executable, "-m", pkg] 645s if subcommand: 645s cmd.extend(subcommand) 645s cmd.append("--help-all") 645s out, err, rc = get_output_error_code(cmd) 645s > assert rc == 0, err 645s E AssertionError: Traceback (most recent call last): 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s E klass = self._resolve_string(klass) 645s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s E return import_item(string) 645s E ^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s E module = __import__(package, fromlist=[obj]) 645s E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s E ModuleNotFoundError: No module named 'jupyter_server' 645s E 645s E During handling of the above exception, another exception occurred: 645s E 645s E Traceback (most recent call last): 645s E File "", line 198, in _run_module_as_main 645s E File "", line 88, in _run_code 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/__main__.py", line 3, in 645s E app.launch_new_instance() 645s E 
File "/usr/lib/python3/dist-packages/jupyter_core/application.py", line 282, in launch_instance 645s E super().launch_instance(argv=argv, **kwargs) 645s E File "/usr/lib/python3/dist-packages/traitlets/config/application.py", line 1073, in launch_instance 645s E app = cls.instance(**kwargs) 645s E ^^^^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/config/configurable.py", line 583, in instance 645s E inst = cls(*args, **kwargs) 645s E ^^^^^^^^^^^^^^^^^^^^ 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s E inst.setup_instance(*args, **kwargs) 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s E super(HasTraits, self).setup_instance(*args, **kwargs) 645s E File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s E init(self) 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s E self._resolve_classes() 645s E File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s E warn(f"{klass} is not importable. Is it installed?", ImportWarning) 645s E TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s /usr/lib/python3/dist-packages/traitlets/tests/utils.py:38: AssertionError 645s =============================== warnings summary =============================== 645s notebook/nbextensions.py:15 645s /tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/nbextensions.py:15: DeprecationWarning: Jupyter is migrating its paths to use standard platformdirs 645s given by the platformdirs library. To remove this warning and 645s see the appropriate new directories, set the environment variable 645s `JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`. 645s The use of platformdirs will be the default in `jupyter_core` v6 645s from jupyter_core.paths import ( 645s 645s notebook/utils.py:280 645s notebook/utils.py:280 645s /tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/utils.py:280: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. 645s return LooseVersion(v) >= LooseVersion(check) 645s 645s notebook/_tz.py:29: 1 warning 645s notebook/services/sessions/tests/test_sessionmanager.py: 9 warnings 645s /tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/_tz.py:29: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC). 645s dt = unaware(*args, **kwargs) 645s 645s notebook/tests/test_notebookapp_integration.py:14 645s /tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/test_notebookapp_integration.py:14: PytestUnknownMarkWarning: Unknown pytest.mark.integration_tests - is this a typo? 
You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html 645s pytestmark = pytest.mark.integration_tests 645s 645s notebook/auth/tests/test_login.py::LoginTest::test_next_bad 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-1 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_import_error 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-2 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/api/tests/test_api.py::APITest::test_get_spec 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-3 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/config/tests/test_config_api.py::APITest::test_create_retrieve_config 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-4 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-5 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-6 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/contents/tests/test_largefilemanager.py: 42 warnings 645s notebook/services/contents/tests/test_manager.py: 526 warnings 645s /tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/_tz.py:29: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC). 645s dt = unaware(*args, **kwargs) 645s 645s notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_connections 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-7 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_connections 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-8 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/kernels/tests/test_kernels_api.py::KernelFilterTest::test_config 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-9 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/kernels/tests/test_kernels_api.py::KernelCullingTest::test_culling 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-10 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernel_resource_file 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-11 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/nbconvert/tests/test_nbconvert_api.py::APITest::test_list_formats 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-12 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-13 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-14 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-15 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_config 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-16 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/tests/test_files.py::FilesTest::test_contents_manager 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-17 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/tests/test_gateway.py::TestGateway::test_gateway_class_mappings 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-18 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar 645s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar 645s notebook/tests/test_nbextensions.py::TestInstallNBExtension::test_install_tar 645s /usr/lib/python3.12/tarfile.py:2221: DeprecationWarning: Python 3.14 will, by default, filter extracted tar archives and reject files or modify their metadata. Use the filter argument to control this behavior. 645s warnings.warn( 645s 645s notebook/tests/test_notebookapp.py::NotebookAppTests::test_list_running_servers 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-19 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_list_running_sock_servers 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-20 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_log_json_enabled 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-21 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/tests/test_paths.py::RedirectTestCase::test_trailing_slash 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-22 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s notebook/tree/tests/test_tree_handler.py::TreeTest::test_redirect 645s /usr/lib/python3/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-23 (start_thread) 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 235, in _resolve_classes 645s klass = self._resolve_string(klass) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 2015, in _resolve_string 645s return import_item(string) 645s ^^^^^^^^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/utils/importstring.py", line 33, in import_item 645s module = __import__(package, fromlist=[obj]) 645s ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 645s ModuleNotFoundError: No module named 'jupyter_server' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 155, in start_thread 645s app = cls.notebook = NotebookApp( 645s ^^^^^^^^^^^^ 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1292, in __new__ 645s inst.setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1335, in setup_instance 645s super(HasTraits, self).setup_instance(*args, **kwargs) 645s File "/usr/lib/python3/dist-packages/traitlets/traitlets.py", line 1311, in setup_instance 645s init(self) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 226, in instance_init 645s self._resolve_classes() 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/traittypes.py", line 238, in _resolve_classes 645s warn(f"{klass} is not importable. 
Is it installed?", ImportWarning) 645s TypeError: warn() missing 1 required keyword-only argument: 'stacklevel' 645s 645s During handling of the above exception, another exception occurred: 645s 645s Traceback (most recent call last): 645s File "/usr/lib/python3.12/threading.py", line 1073, in _bootstrap_inner 645s self.run() 645s File "/usr/lib/python3.12/threading.py", line 1010, in run 645s self._target(*self._args, **self._kwargs) 645s File "/tmp/autopkgtest.FMSSaJ/build.uPX/src/notebook/tests/launchnotebook.py", line 193, in start_thread 645s app.session_manager.close() 645s ^^^ 645s UnboundLocalError: cannot access local variable 'app' where it is not associated with a value 645s 645s warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) 645s 645s -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html 645s =========================== short test summary info ============================ 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_delete_session 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_get_session 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_bad_update_session 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_delete_session 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_get_session 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_get_session_dead_kernel 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_list_sessions 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_list_sessions_dead_kernel 645s FAILED notebook/services/sessions/tests/test_sessionmanager.py::TestSessionManager::test_update_session 645s FAILED notebook/tests/test_notebookapp.py::test_help_output - AssertionError:... 645s FAILED notebook/tests/test_notebookapp.py::test_server_info_file - TypeError:... 645s FAILED notebook/tests/test_notebookapp.py::test_nb_dir - TypeError: warn() mi... 645s FAILED notebook/tests/test_notebookapp.py::test_no_create_nb_dir - TypeError:... 645s FAILED notebook/tests/test_notebookapp.py::test_missing_nb_dir - TypeError: w... 645s FAILED notebook/tests/test_notebookapp.py::test_invalid_nb_dir - TypeError: w... 645s FAILED notebook/tests/test_notebookapp.py::test_nb_dir_with_slash - TypeError... 645s FAILED notebook/tests/test_notebookapp.py::test_nb_dir_root - TypeError: warn... 645s FAILED notebook/tests/test_notebookapp.py::test_generate_config - TypeError: ... 645s FAILED notebook/tests/test_notebookapp.py::test_notebook_password - TypeError... 645s FAILED notebook/tests/test_serverextensions.py::TestInstallServerExtension::test_merge_config 645s FAILED notebook/tests/test_serverextensions.py::TestOrderedServerExtension::test_load_ordered 645s FAILED notebook/tests/test_utils.py::test_help_output - AssertionError: Trace... 645s ERROR notebook/auth/tests/test_login.py::LoginTest::test_next_bad - RuntimeEr... 645s ERROR notebook/auth/tests/test_login.py::LoginTest::test_next_ok - RuntimeErr... 
645s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_import_error 645s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_invoke 645s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_bundler_not_enabled 645s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_missing_bundler_arg 645s ERROR notebook/bundler/tests/test_bundler_api.py::BundleAPITest::test_notebook_not_found 645s ERROR notebook/services/api/tests/test_api.py::APITest::test_get_spec - Runti... 645s ERROR notebook/services/api/tests/test_api.py::APITest::test_get_status - Run... 645s ERROR notebook/services/api/tests/test_api.py::APITest::test_no_track_activity 645s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_create_retrieve_config 645s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_get_unknown 645s ERROR notebook/services/config/tests/test_config_api.py::APITest::test_modify 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_checkpoints_separate_root 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_400_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_copy 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_dir_400 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_path 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_put_400 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_copy_put_400_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_create_untitled 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_create_untitled_txt 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_delete_hidden_dir 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_delete_hidden_file 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_file_checkpoints 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_404_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_bad_type 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_binary_file_contents 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_contents_no_such_file 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_dir_no_content 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_contents 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_invalid 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_nb_no_content 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_get_text_file_contents 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_dirs 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_nonexistant_dir 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_list_notebooks 645s ERROR 
notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir_hidden_400 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_mkdir_untitled 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename_400_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_rename_existing 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_save 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_b64 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_txt 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_txt_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::APITest::test_upload_v2 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_checkpoints_separate_root 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_config_did_something 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_400_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_copy 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_dir_400 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_path 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_put_400 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_copy_put_400_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_create_untitled 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_create_untitled_txt 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_delete_hidden_dir 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_delete_hidden_file 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_file_checkpoints 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_404_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_bad_type 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_binary_file_contents 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_contents_no_such_file 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_dir_no_content 645s ERROR 
notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_contents 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_invalid 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_nb_no_content 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_get_text_file_contents 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_dirs 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_nonexistant_dir 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_list_notebooks 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir_hidden_400 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_mkdir_untitled 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename_400_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_rename_existing 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_save 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_b64 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_txt 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_txt_hidden 645s ERROR notebook/services/contents/tests/test_contents_api.py::GenericFileCheckpointsAPITest::test_upload_v2 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_connections 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_default_kernel 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_kernel_handler 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_main_kernel_handler 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelAPITest::test_no_kernels 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_connections 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_default_kernel 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_kernel_handler 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_main_kernel_handler 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::AsyncKernelAPITest::test_no_kernels 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelFilterTest::test_config 645s ERROR notebook/services/kernels/tests/test_kernels_api.py::KernelCullingTest::test_culling 645s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernel_resource_file 645s ERROR 
notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernelspec 645s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_kernelspec_spaces 645s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_nonexistant_kernelspec 645s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_get_nonexistant_resource 645s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_list_kernelspecs 645s ERROR notebook/services/kernelspecs/tests/test_kernelspecs_api.py::APITest::test_list_kernelspecs_bad 645s ERROR notebook/services/nbconvert/tests/test_nbconvert_api.py::APITest::test_list_formats 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_console_session 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_deprecated 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_file_session 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_create_with_kernel_id 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_delete 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_kernel_id 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_kernel_name 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_path 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_path_deprecated 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::SessionAPITest::test_modify_type 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_console_session 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_deprecated 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_file_session 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_create_with_kernel_id 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_delete 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_kernel_id 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_kernel_name 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_path 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_path_deprecated 645s ERROR notebook/services/sessions/tests/test_sessions_api.py::AsyncSessionAPITest::test_modify_type 645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal 645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal_via_get 645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_create_terminal_with_name 645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_no_terminals 645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_terminal_handler 
645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalAPITest::test_terminal_root_handler 645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_config 645s ERROR notebook/terminal/tests/test_terminals_api.py::TerminalCullingTest::test_culling 645s ERROR notebook/tests/test_files.py::FilesTest::test_contents_manager - Runtim... 645s ERROR notebook/tests/test_files.py::FilesTest::test_download - RuntimeError: ... 645s ERROR notebook/tests/test_files.py::FilesTest::test_hidden_files - RuntimeErr... 645s ERROR notebook/tests/test_files.py::FilesTest::test_old_files_redirect - Runt... 645s ERROR notebook/tests/test_files.py::FilesTest::test_view_html - RuntimeError:... 645s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_class_mappings 645s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_get_kernelspecs 645s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_get_named_kernelspec 645s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_kernel_lifecycle 645s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_options - Run... 645s ERROR notebook/tests/test_gateway.py::TestGateway::test_gateway_session_lifecycle 645s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_list_running_servers 645s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_log_json_default 645s ERROR notebook/tests/test_notebookapp.py::NotebookAppTests::test_validate_log_json 645s ERROR notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_list_running_sock_servers 645s ERROR notebook/tests/test_notebookapp.py::NotebookUnixSocketTests::test_run 645s ERROR notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_log_json_enabled 645s ERROR notebook/tests/test_notebookapp.py::NotebookAppJSONLoggingTests::test_validate_log_json 645s ERROR notebook/tests/test_paths.py::RedirectTestCase::test_trailing_slash - R... 645s ERROR notebook/tree/tests/test_tree_handler.py::TreeTest::test_redirect - Run... 
645s = 22 failed, 123 passed, 20 skipped, 5 deselected, 608 warnings, 160 errors in 29.77s = 646s autopkgtest [23:20:14]: test pytest: -----------------------] 646s pytest FAIL non-zero exit status 1 646s autopkgtest [23:20:14]: test pytest: - - - - - - - - - - results - - - - - - - - - - 646s autopkgtest [23:20:14]: test command1: preparing testbed 1035s autopkgtest [23:26:43]: testbed dpkg architecture: amd64 1035s autopkgtest [23:26:43]: testbed apt version: 2.7.14build2 1035s autopkgtest [23:26:43]: test architecture: i386 1035s autopkgtest [23:26:43]: @@@@@@@@@@@@@@@@@@@@ test bed setup 1035s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [73.9 kB] 1035s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [1145 kB] 1035s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [1964 B] 1035s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [17.6 kB] 1035s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [128 kB] 1035s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main i386 Packages [171 kB] 1035s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main amd64 Packages [215 kB] 1035s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted amd64 Packages [7700 B] 1035s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/universe i386 Packages [523 kB] 1035s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe amd64 Packages [1033 kB] 1035s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse i386 Packages [19.0 kB] 1035s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse amd64 Packages [53.1 kB] 1036s Fetched 3388 kB in 1s (6167 kB/s) 1036s Reading package lists... 1037s Reading package lists... 1037s Building dependency tree... 1037s Reading state information... 1038s Calculating upgrade... 1038s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1038s Reading package lists... 1038s Building dependency tree... 1038s Reading state information... 1038s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1038s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 1038s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 1038s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 1038s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 1040s Reading package lists... 1040s Reading package lists... 1040s Building dependency tree... 1040s Reading state information... 1040s Calculating upgrade... 1040s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1040s Reading package lists... 1041s Building dependency tree... 1041s Reading state information... 1041s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1044s Note, using file '/tmp/autopkgtest.FMSSaJ/3-autopkgtest-satdep.dsc' to get the build dependencies 1044s Reading package lists... 1044s Building dependency tree... 1044s Reading state information... 
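Note on the failure signature above: nearly every FAILED/ERROR entry traces back to the same two chained exceptions shown in the traceback, a TypeError from a warn() helper that now requires a keyword-only stacklevel argument (the helper presumably ships with the python3-traitlets 5.14.3-1 package installed from oracular-proposed below), followed by an UnboundLocalError when the cleanup in launchnotebook's start_thread touches app after setup already failed. The sketch below is a minimal, self-contained reconstruction of that shape only; the bodies of warn() and start_thread() are assumptions for illustration, not the actual traitlets or notebook code.

    # Illustrative sketch only -- assumed bodies, names taken from the traceback above.
    import warnings


    def warn(msg, category=UserWarning, *, stacklevel):
        # Stand-in for a warn() helper whose newer signature makes stacklevel a
        # required keyword-only argument.
        warnings.warn(msg, category, stacklevel=stacklevel + 1)


    def start_thread():
        try:
            # Old-style call site omits stacklevel (the warning text is truncated
            # in the log, so a placeholder message is used here); this raises
            # TypeError: warn() missing 1 required keyword-only argument: 'stacklevel'
            warn("... Is it installed?", ImportWarning)
            app = object()  # never reached, but the assignment makes app a local name
        except Exception:
            # Setup failed before app was bound, so this cleanup raises
            # UnboundLocalError -- the secondary exception pytest reports.
            app.session_manager.close()


    try:
        start_thread()
    except UnboundLocalError as exc:
        # On Python 3.12: cannot access local variable 'app' where it is not
        # associated with a value
        print(exc)

Under this reading, the 22 failures and 160 collection-time errors tallied above are one regression surfacing through many tests rather than independent breakages.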
1044s Starting pkgProblemResolver with broken count: 0 1044s Starting 2 pkgProblemResolver with broken count: 0 1044s Done 1045s The following NEW packages will be installed: 1045s build-essential cpp cpp-13 cpp-13-x86-64-linux-gnu cpp-x86-64-linux-gnu 1045s fonts-font-awesome fonts-glyphicons-halflings fonts-lato fonts-mathjax g++ 1045s g++-13 g++-13-x86-64-linux-gnu g++-x86-64-linux-gnu gcc gcc-13 gcc-13-base 1045s gcc-13-x86-64-linux-gnu gcc-x86-64-linux-gnu gdb jupyter-core 1045s jupyter-notebook libasan8 libatomic1 libbabeltrace1 libcc1-0 1045s libdebuginfod-common libdebuginfod1t64 libgcc-13-dev libgomp1 libhwasan0 1045s libipt2 libisl23 libitm1 libjs-backbone libjs-bootstrap libjs-bootstrap-tour 1045s libjs-codemirror libjs-es6-promise libjs-jed libjs-jquery 1045s libjs-jquery-typeahead libjs-jquery-ui libjs-marked libjs-mathjax 1045s libjs-moment libjs-requirejs libjs-requirejs-text libjs-sphinxdoc 1045s libjs-text-encoding libjs-underscore libjs-xterm liblsan0 libmpc3 1045s libnorm1t64 libpgm-5.3-0t64 libpython3.12t64 libquadmath0 libsodium23 1045s libsource-highlight-common libsource-highlight4t64 libstdc++-13-dev libtsan2 1045s libubsan1 libxslt1.1 libzmq5 node-jed python-notebook-doc 1045s python-tinycss2-common python3-argon2 python3-asttokens python3-bleach 1045s python3-bs4 python3-bytecode python3-comm python3-coverage python3-dateutil 1045s python3-debugpy python3-decorator python3-defusedxml python3-entrypoints 1045s python3-executing python3-fastjsonschema python3-html5lib python3-ipykernel 1045s python3-ipython python3-ipython-genutils python3-jedi python3-jupyter-client 1045s python3-jupyter-core python3-jupyterlab-pygments python3-lxml 1045s python3-lxml-html-clean python3-matplotlib-inline python3-nbclient 1045s python3-nbconvert python3-nbformat python3-nest-asyncio python3-notebook 1045s python3-packaging python3-pandocfilters python3-parso python3-pexpect 1045s python3-platformdirs python3-prometheus-client python3-prompt-toolkit 1045s python3-psutil python3-ptyprocess python3-pure-eval python3-py 1045s python3-pydevd python3-send2trash python3-soupsieve python3-stack-data 1045s python3-terminado python3-tinycss2 python3-tornado python3-traitlets 1045s python3-typeshed python3-wcwidth python3-webencodings python3-zmq 1045s sphinx-rtd-theme-common 1045s 0 upgraded, 122 newly installed, 0 to remove and 0 not upgraded. 1045s Need to get 96.8 MB of archives. 1045s After this operation, 396 MB of additional disk space will be used. 
1045s Get:1 http://ftpmaster.internal/ubuntu oracular/main amd64 fonts-lato all 2.015-1 [2781 kB] 1045s Get:2 http://ftpmaster.internal/ubuntu oracular/main amd64 libdebuginfod-common all 0.190-1.1build4 [14.2 kB] 1045s Get:3 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13-base amd64 13.2.0-23ubuntu4 [49.0 kB] 1045s Get:4 http://ftpmaster.internal/ubuntu oracular/main amd64 libisl23 amd64 0.26-3build1 [680 kB] 1045s Get:5 http://ftpmaster.internal/ubuntu oracular/main amd64 libmpc3 amd64 1.3.1-1build1 [54.5 kB] 1045s Get:6 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [11.2 MB] 1045s Get:7 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-13 amd64 13.2.0-23ubuntu4 [1032 B] 1045s Get:8 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [5326 B] 1045s Get:9 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp amd64 4:13.2.0-7ubuntu1 [22.4 kB] 1045s Get:10 http://ftpmaster.internal/ubuntu oracular/main amd64 libcc1-0 amd64 14-20240412-0ubuntu1 [47.7 kB] 1045s Get:11 http://ftpmaster.internal/ubuntu oracular/main amd64 libgomp1 amd64 14-20240412-0ubuntu1 [147 kB] 1045s Get:12 http://ftpmaster.internal/ubuntu oracular/main amd64 libitm1 amd64 14-20240412-0ubuntu1 [28.9 kB] 1045s Get:13 http://ftpmaster.internal/ubuntu oracular/main amd64 libatomic1 amd64 14-20240412-0ubuntu1 [10.4 kB] 1045s Get:14 http://ftpmaster.internal/ubuntu oracular/main amd64 libasan8 amd64 14-20240412-0ubuntu1 [3024 kB] 1045s Get:15 http://ftpmaster.internal/ubuntu oracular/main amd64 liblsan0 amd64 14-20240412-0ubuntu1 [1313 kB] 1045s Get:16 http://ftpmaster.internal/ubuntu oracular/main amd64 libtsan2 amd64 14-20240412-0ubuntu1 [2736 kB] 1045s Get:17 http://ftpmaster.internal/ubuntu oracular/main amd64 libubsan1 amd64 14-20240412-0ubuntu1 [1175 kB] 1045s Get:18 http://ftpmaster.internal/ubuntu oracular/main amd64 libhwasan0 amd64 14-20240412-0ubuntu1 [1632 kB] 1045s Get:19 http://ftpmaster.internal/ubuntu oracular/main amd64 libquadmath0 amd64 14-20240412-0ubuntu1 [153 kB] 1045s Get:20 http://ftpmaster.internal/ubuntu oracular/main amd64 libgcc-13-dev amd64 13.2.0-23ubuntu4 [2688 kB] 1045s Get:21 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [21.9 MB] 1045s Get:22 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13 amd64 13.2.0-23ubuntu4 [482 kB] 1045s Get:23 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [1212 B] 1045s Get:24 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc amd64 4:13.2.0-7ubuntu1 [5018 B] 1045s Get:25 http://ftpmaster.internal/ubuntu oracular/main amd64 libstdc++-13-dev amd64 13.2.0-23ubuntu4 [2399 kB] 1045s Get:26 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [12.5 MB] 1045s Get:27 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-13 amd64 13.2.0-23ubuntu4 [14.5 kB] 1045s Get:28 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [964 B] 1045s Get:29 http://ftpmaster.internal/ubuntu oracular/main amd64 g++ amd64 4:13.2.0-7ubuntu1 [1100 B] 1045s Get:30 http://ftpmaster.internal/ubuntu oracular/main amd64 build-essential amd64 12.10ubuntu1 [4928 B] 1045s Get:31 http://ftpmaster.internal/ubuntu oracular/main amd64 fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 1045s Get:32 http://ftpmaster.internal/ubuntu oracular/universe amd64 
fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB] 1045s Get:33 http://ftpmaster.internal/ubuntu oracular/main amd64 fonts-mathjax all 2.7.9+dfsg-1 [2208 kB] 1045s Get:34 http://ftpmaster.internal/ubuntu oracular/main amd64 libbabeltrace1 amd64 1.5.11-3build3 [164 kB] 1045s Get:35 http://ftpmaster.internal/ubuntu oracular/main amd64 libdebuginfod1t64 amd64 0.190-1.1build4 [17.1 kB] 1045s Get:36 http://ftpmaster.internal/ubuntu oracular/main amd64 libipt2 amd64 2.0.6-1build1 [45.7 kB] 1045s Get:37 http://ftpmaster.internal/ubuntu oracular/main amd64 libpython3.12t64 amd64 3.12.3-1 [2339 kB] 1045s Get:38 http://ftpmaster.internal/ubuntu oracular/main amd64 libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB] 1045s Get:39 http://ftpmaster.internal/ubuntu oracular/main amd64 libsource-highlight4t64 amd64 3.1.9-4.3build1 [258 kB] 1045s Get:40 http://ftpmaster.internal/ubuntu oracular/main amd64 gdb amd64 15.0.50.20240403-0ubuntu1 [4010 kB] 1045s Get:41 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-platformdirs all 4.2.0-1 [16.1 kB] 1045s Get:42 http://ftpmaster.internal/ubuntu oracular-proposed/universe amd64 python3-traitlets all 5.14.3-1 [71.3 kB] 1045s Get:43 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyter-core all 5.3.2-1ubuntu1 [25.5 kB] 1045s Get:44 http://ftpmaster.internal/ubuntu oracular/universe amd64 jupyter-core all 5.3.2-1ubuntu1 [4044 B] 1045s Get:45 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 1045s Get:46 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB] 1045s Get:47 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-bootstrap all 3.4.1+dfsg-3 [129 kB] 1045s Get:48 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 1045s Get:49 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB] 1045s Get:50 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB] 1045s Get:51 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-es6-promise all 4.2.8-12 [14.1 kB] 1045s Get:52 http://ftpmaster.internal/ubuntu oracular/universe amd64 node-jed all 1.1.1-4 [15.2 kB] 1045s Get:53 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jed all 1.1.1-4 [2584 B] 1045s Get:54 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB] 1045s Get:55 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB] 1045s Get:56 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB] 1045s Get:57 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-mathjax all 2.7.9+dfsg-1 [5665 kB] 1046s Get:58 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-moment all 2.29.4+ds-1 [147 kB] 1046s Get:59 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-requirejs all 2.3.6+ds+~2.1.34-2 [201 kB] 1046s Get:60 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-requirejs-text all 2.0.12-1.1 [9056 B] 1046s Get:61 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-text-encoding all 0.7.0-5 [140 kB] 1046s Get:62 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-xterm all 5.3.0-2 [476 kB] 1046s Get:63 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-ptyprocess all 0.7.0-5 [15.1 kB] 
1046s Get:64 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-tornado amd64 6.4.0-1build1 [297 kB] 1046s Get:65 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-terminado all 0.17.1-1 [15.9 kB] 1046s Get:66 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-argon2 amd64 21.1.0-2build1 [21.0 kB] 1046s Get:67 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-comm all 0.2.1-1 [7016 B] 1046s Get:68 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-bytecode all 0.15.1-3 [44.7 kB] 1046s Get:69 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-coverage amd64 7.4.4+dfsg1-0ubuntu2 [147 kB] 1046s Get:70 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pydevd amd64 2.10.0+ds-10ubuntu1 [637 kB] 1046s Get:71 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB] 1046s Get:72 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-decorator all 5.1.1-5 [10.1 kB] 1046s Get:73 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-parso all 0.8.3-1 [67.2 kB] 1046s Get:74 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB] 1046s Get:75 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jedi all 0.19.1+ds1-1 [693 kB] 1046s Get:76 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-matplotlib-inline all 0.1.6-2 [8784 B] 1046s Get:77 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-pexpect all 4.9-2 [48.1 kB] 1046s Get:78 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 1046s Get:79 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-prompt-toolkit all 3.0.43-1 [256 kB] 1046s Get:80 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-asttokens all 2.4.1-1 [20.9 kB] 1046s Get:81 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-executing all 2.0.1-0.1 [23.3 kB] 1046s Get:82 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pure-eval all 0.2.2-2 [11.1 kB] 1046s Get:83 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-stack-data all 0.6.3-1 [22.0 kB] 1046s Get:84 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipython all 8.20.0-1 [561 kB] 1046s Get:85 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-dateutil all 2.8.2-3ubuntu1 [79.4 kB] 1046s Get:86 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-entrypoints all 0.4-2 [7146 B] 1046s Get:87 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nest-asyncio all 1.5.4-1 [6256 B] 1046s Get:88 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-py all 1.11.0-2 [72.7 kB] 1046s Get:89 http://ftpmaster.internal/ubuntu oracular/universe amd64 libnorm1t64 amd64 1.5.9+dfsg-3.1build1 [154 kB] 1046s Get:90 http://ftpmaster.internal/ubuntu oracular/universe amd64 libpgm-5.3-0t64 amd64 5.3.128~dfsg-2.1build1 [167 kB] 1046s Get:91 http://ftpmaster.internal/ubuntu oracular/main amd64 libsodium23 amd64 1.0.18-1build3 [161 kB] 1046s Get:92 http://ftpmaster.internal/ubuntu oracular/universe amd64 libzmq5 amd64 4.3.5-1build2 [260 kB] 1046s Get:93 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-zmq amd64 24.0.1-5build1 [286 kB] 1046s Get:94 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB] 1046s Get:95 http://ftpmaster.internal/ubuntu oracular/main 
amd64 python3-packaging all 24.0-1 [41.1 kB] 1046s Get:96 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-psutil amd64 5.9.8-2build2 [195 kB] 1046s Get:97 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipykernel all 6.29.3-1 [82.4 kB] 1046s Get:98 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipython-genutils all 0.2.0-6 [22.0 kB] 1046s Get:99 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-webencodings all 0.5.1-5 [11.5 kB] 1046s Get:100 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-html5lib all 1.1-6 [88.8 kB] 1046s Get:101 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-bleach all 6.1.0-2 [49.6 kB] 1046s Get:102 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-soupsieve all 2.5-1 [33.0 kB] 1046s Get:103 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-bs4 all 4.12.3-1 [109 kB] 1046s Get:104 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-defusedxml all 0.7.1-2 [42.0 kB] 1046s Get:105 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyterlab-pygments all 0.2.2-3 [6054 B] 1046s Get:106 http://ftpmaster.internal/ubuntu oracular/main amd64 libxslt1.1 amd64 1.1.39-0exp1build1 [167 kB] 1046s Get:107 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-lxml amd64 5.2.1-1 [1243 kB] 1046s Get:108 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-fastjsonschema all 2.19.0-1 [19.6 kB] 1046s Get:109 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbformat all 5.9.1-1 [41.2 kB] 1046s Get:110 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbclient all 0.8.0-1 [55.6 kB] 1046s Get:111 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pandocfilters all 1.5.1-1 [23.6 kB] 1046s Get:112 http://ftpmaster.internal/ubuntu oracular/universe amd64 python-tinycss2-common all 1.2.1-2 [33.9 kB] 1046s Get:113 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-tinycss2 all 1.2.1-2 [19.6 kB] 1046s Get:114 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-lxml-html-clean all 0.1.1-1 [12.0 kB] 1046s Get:115 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbconvert all 6.5.3-5 [152 kB] 1046s Get:116 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB] 1046s Get:117 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-send2trash all 1.8.2-1 [15.5 kB] 1046s Get:118 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB] 1046s Get:119 http://ftpmaster.internal/ubuntu oracular/universe amd64 jupyter-notebook all 6.4.12-2.2ubuntu1 [10.4 kB] 1046s Get:120 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-sphinxdoc all 7.2.6-6 [149 kB] 1046s Get:121 http://ftpmaster.internal/ubuntu oracular/main amd64 sphinx-rtd-theme-common all 2.0.0+dfsg-1 [1012 kB] 1046s Get:122 http://ftpmaster.internal/ubuntu oracular/universe amd64 python-notebook-doc all 6.4.12-2.2ubuntu1 [2540 kB] 1046s Preconfiguring packages ... 1046s Fetched 96.8 MB in 1s (97.5 MB/s) 1047s Selecting previously unselected package fonts-lato. 1047s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 
60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 73897 files and directories currently installed.) 1047s Preparing to unpack .../000-fonts-lato_2.015-1_all.deb ... 1047s Unpacking fonts-lato (2.015-1) ... 1047s Selecting previously unselected package libdebuginfod-common. 1047s Preparing to unpack .../001-libdebuginfod-common_0.190-1.1build4_all.deb ... 1047s Unpacking libdebuginfod-common (0.190-1.1build4) ... 1047s Selecting previously unselected package gcc-13-base:amd64. 1047s Preparing to unpack .../002-gcc-13-base_13.2.0-23ubuntu4_amd64.deb ... 1047s Unpacking gcc-13-base:amd64 (13.2.0-23ubuntu4) ... 1047s Selecting previously unselected package libisl23:amd64. 1047s Preparing to unpack .../003-libisl23_0.26-3build1_amd64.deb ... 1047s Unpacking libisl23:amd64 (0.26-3build1) ... 1047s Selecting previously unselected package libmpc3:amd64. 1047s Preparing to unpack .../004-libmpc3_1.3.1-1build1_amd64.deb ... 1047s Unpacking libmpc3:amd64 (1.3.1-1build1) ... 1047s Selecting previously unselected package cpp-13-x86-64-linux-gnu. 1047s Preparing to unpack .../005-cpp-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 1047s Unpacking cpp-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1047s Selecting previously unselected package cpp-13. 1047s Preparing to unpack .../006-cpp-13_13.2.0-23ubuntu4_amd64.deb ... 1047s Unpacking cpp-13 (13.2.0-23ubuntu4) ... 1047s Selecting previously unselected package cpp-x86-64-linux-gnu. 1047s Preparing to unpack .../007-cpp-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 1047s Unpacking cpp-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1047s Selecting previously unselected package cpp. 1047s Preparing to unpack .../008-cpp_4%3a13.2.0-7ubuntu1_amd64.deb ... 1047s Unpacking cpp (4:13.2.0-7ubuntu1) ... 1047s Selecting previously unselected package libcc1-0:amd64. 1047s Preparing to unpack .../009-libcc1-0_14-20240412-0ubuntu1_amd64.deb ... 1047s Unpacking libcc1-0:amd64 (14-20240412-0ubuntu1) ... 1047s Selecting previously unselected package libgomp1:amd64. 1047s Preparing to unpack .../010-libgomp1_14-20240412-0ubuntu1_amd64.deb ... 1047s Unpacking libgomp1:amd64 (14-20240412-0ubuntu1) ... 1047s Selecting previously unselected package libitm1:amd64. 1047s Preparing to unpack .../011-libitm1_14-20240412-0ubuntu1_amd64.deb ... 1047s Unpacking libitm1:amd64 (14-20240412-0ubuntu1) ... 1047s Selecting previously unselected package libatomic1:amd64. 1047s Preparing to unpack .../012-libatomic1_14-20240412-0ubuntu1_amd64.deb ... 1047s Unpacking libatomic1:amd64 (14-20240412-0ubuntu1) ... 1047s Selecting previously unselected package libasan8:amd64. 1047s Preparing to unpack .../013-libasan8_14-20240412-0ubuntu1_amd64.deb ... 1047s Unpacking libasan8:amd64 (14-20240412-0ubuntu1) ... 1047s Selecting previously unselected package liblsan0:amd64. 1047s Preparing to unpack .../014-liblsan0_14-20240412-0ubuntu1_amd64.deb ... 1047s Unpacking liblsan0:amd64 (14-20240412-0ubuntu1) ... 1048s Selecting previously unselected package libtsan2:amd64. 1048s Preparing to unpack .../015-libtsan2_14-20240412-0ubuntu1_amd64.deb ... 1048s Unpacking libtsan2:amd64 (14-20240412-0ubuntu1) ... 1048s Selecting previously unselected package libubsan1:amd64. 1048s Preparing to unpack .../016-libubsan1_14-20240412-0ubuntu1_amd64.deb ... 1048s Unpacking libubsan1:amd64 (14-20240412-0ubuntu1) ... 
1048s Selecting previously unselected package libhwasan0:amd64. 1048s Preparing to unpack .../017-libhwasan0_14-20240412-0ubuntu1_amd64.deb ... 1048s Unpacking libhwasan0:amd64 (14-20240412-0ubuntu1) ... 1048s Selecting previously unselected package libquadmath0:amd64. 1048s Preparing to unpack .../018-libquadmath0_14-20240412-0ubuntu1_amd64.deb ... 1048s Unpacking libquadmath0:amd64 (14-20240412-0ubuntu1) ... 1048s Selecting previously unselected package libgcc-13-dev:amd64. 1048s Preparing to unpack .../019-libgcc-13-dev_13.2.0-23ubuntu4_amd64.deb ... 1048s Unpacking libgcc-13-dev:amd64 (13.2.0-23ubuntu4) ... 1048s Selecting previously unselected package gcc-13-x86-64-linux-gnu. 1048s Preparing to unpack .../020-gcc-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 1048s Unpacking gcc-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1048s Selecting previously unselected package gcc-13. 1048s Preparing to unpack .../021-gcc-13_13.2.0-23ubuntu4_amd64.deb ... 1048s Unpacking gcc-13 (13.2.0-23ubuntu4) ... 1048s Selecting previously unselected package gcc-x86-64-linux-gnu. 1048s Preparing to unpack .../022-gcc-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 1048s Unpacking gcc-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1048s Selecting previously unselected package gcc. 1048s Preparing to unpack .../023-gcc_4%3a13.2.0-7ubuntu1_amd64.deb ... 1048s Unpacking gcc (4:13.2.0-7ubuntu1) ... 1048s Selecting previously unselected package libstdc++-13-dev:amd64. 1048s Preparing to unpack .../024-libstdc++-13-dev_13.2.0-23ubuntu4_amd64.deb ... 1048s Unpacking libstdc++-13-dev:amd64 (13.2.0-23ubuntu4) ... 1048s Selecting previously unselected package g++-13-x86-64-linux-gnu. 1048s Preparing to unpack .../025-g++-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 1048s Unpacking g++-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1049s Selecting previously unselected package g++-13. 1049s Preparing to unpack .../026-g++-13_13.2.0-23ubuntu4_amd64.deb ... 1049s Unpacking g++-13 (13.2.0-23ubuntu4) ... 1049s Selecting previously unselected package g++-x86-64-linux-gnu. 1049s Preparing to unpack .../027-g++-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 1049s Unpacking g++-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1049s Selecting previously unselected package g++. 1049s Preparing to unpack .../028-g++_4%3a13.2.0-7ubuntu1_amd64.deb ... 1049s Unpacking g++ (4:13.2.0-7ubuntu1) ... 1049s Selecting previously unselected package build-essential. 1049s Preparing to unpack .../029-build-essential_12.10ubuntu1_amd64.deb ... 1049s Unpacking build-essential (12.10ubuntu1) ... 1049s Selecting previously unselected package fonts-font-awesome. 1049s Preparing to unpack .../030-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 1049s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 1049s Selecting previously unselected package fonts-glyphicons-halflings. 1049s Preparing to unpack .../031-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ... 1049s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 1049s Selecting previously unselected package fonts-mathjax. 1049s Preparing to unpack .../032-fonts-mathjax_2.7.9+dfsg-1_all.deb ... 1049s Unpacking fonts-mathjax (2.7.9+dfsg-1) ... 1049s Selecting previously unselected package libbabeltrace1:amd64. 1049s Preparing to unpack .../033-libbabeltrace1_1.5.11-3build3_amd64.deb ... 1049s Unpacking libbabeltrace1:amd64 (1.5.11-3build3) ... 1049s Selecting previously unselected package libdebuginfod1t64:amd64. 
1049s Preparing to unpack .../034-libdebuginfod1t64_0.190-1.1build4_amd64.deb ... 1049s Unpacking libdebuginfod1t64:amd64 (0.190-1.1build4) ... 1049s Selecting previously unselected package libipt2. 1049s Preparing to unpack .../035-libipt2_2.0.6-1build1_amd64.deb ... 1049s Unpacking libipt2 (2.0.6-1build1) ... 1049s Selecting previously unselected package libpython3.12t64:amd64. 1049s Preparing to unpack .../036-libpython3.12t64_3.12.3-1_amd64.deb ... 1049s Unpacking libpython3.12t64:amd64 (3.12.3-1) ... 1049s Selecting previously unselected package libsource-highlight-common. 1049s Preparing to unpack .../037-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 1049s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 1049s Selecting previously unselected package libsource-highlight4t64:amd64. 1049s Preparing to unpack .../038-libsource-highlight4t64_3.1.9-4.3build1_amd64.deb ... 1049s Unpacking libsource-highlight4t64:amd64 (3.1.9-4.3build1) ... 1049s Selecting previously unselected package gdb. 1049s Preparing to unpack .../039-gdb_15.0.50.20240403-0ubuntu1_amd64.deb ... 1049s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 1049s Selecting previously unselected package python3-platformdirs. 1049s Preparing to unpack .../040-python3-platformdirs_4.2.0-1_all.deb ... 1049s Unpacking python3-platformdirs (4.2.0-1) ... 1049s Selecting previously unselected package python3-traitlets. 1049s Preparing to unpack .../041-python3-traitlets_5.14.3-1_all.deb ... 1049s Unpacking python3-traitlets (5.14.3-1) ... 1049s Selecting previously unselected package python3-jupyter-core. 1049s Preparing to unpack .../042-python3-jupyter-core_5.3.2-1ubuntu1_all.deb ... 1049s Unpacking python3-jupyter-core (5.3.2-1ubuntu1) ... 1049s Selecting previously unselected package jupyter-core. 1049s Preparing to unpack .../043-jupyter-core_5.3.2-1ubuntu1_all.deb ... 1049s Unpacking jupyter-core (5.3.2-1ubuntu1) ... 1049s Selecting previously unselected package libjs-underscore. 1050s Preparing to unpack .../044-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 1050s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 1050s Selecting previously unselected package libjs-backbone. 1050s Preparing to unpack .../045-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 1050s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 1050s Selecting previously unselected package libjs-bootstrap. 1050s Preparing to unpack .../046-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 1050s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 1050s Selecting previously unselected package libjs-jquery. 1050s Preparing to unpack .../047-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 1050s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 1050s Selecting previously unselected package libjs-bootstrap-tour. 1050s Preparing to unpack .../048-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 1050s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 1050s Selecting previously unselected package libjs-codemirror. 1050s Preparing to unpack .../049-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ... 1050s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ... 1050s Selecting previously unselected package libjs-es6-promise. 1050s Preparing to unpack .../050-libjs-es6-promise_4.2.8-12_all.deb ... 1050s Unpacking libjs-es6-promise (4.2.8-12) ... 1050s Selecting previously unselected package node-jed. 1050s Preparing to unpack .../051-node-jed_1.1.1-4_all.deb ... 1050s Unpacking node-jed (1.1.1-4) ... 1050s Selecting previously unselected package libjs-jed. 
1050s Preparing to unpack .../052-libjs-jed_1.1.1-4_all.deb ... 1050s Unpacking libjs-jed (1.1.1-4) ... 1050s Selecting previously unselected package libjs-jquery-typeahead. 1050s Preparing to unpack .../053-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 1050s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 1050s Selecting previously unselected package libjs-jquery-ui. 1050s Preparing to unpack .../054-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 1050s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 1050s Selecting previously unselected package libjs-marked. 1050s Preparing to unpack .../055-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ... 1050s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ... 1050s Selecting previously unselected package libjs-mathjax. 1050s Preparing to unpack .../056-libjs-mathjax_2.7.9+dfsg-1_all.deb ... 1050s Unpacking libjs-mathjax (2.7.9+dfsg-1) ... 1051s Selecting previously unselected package libjs-moment. 1051s Preparing to unpack .../057-libjs-moment_2.29.4+ds-1_all.deb ... 1051s Unpacking libjs-moment (2.29.4+ds-1) ... 1051s Selecting previously unselected package libjs-requirejs. 1051s Preparing to unpack .../058-libjs-requirejs_2.3.6+ds+~2.1.34-2_all.deb ... 1051s Unpacking libjs-requirejs (2.3.6+ds+~2.1.34-2) ... 1051s Selecting previously unselected package libjs-requirejs-text. 1051s Preparing to unpack .../059-libjs-requirejs-text_2.0.12-1.1_all.deb ... 1051s Unpacking libjs-requirejs-text (2.0.12-1.1) ... 1051s Selecting previously unselected package libjs-text-encoding. 1051s Preparing to unpack .../060-libjs-text-encoding_0.7.0-5_all.deb ... 1051s Unpacking libjs-text-encoding (0.7.0-5) ... 1051s Selecting previously unselected package libjs-xterm. 1051s Preparing to unpack .../061-libjs-xterm_5.3.0-2_all.deb ... 1051s Unpacking libjs-xterm (5.3.0-2) ... 1051s Selecting previously unselected package python3-ptyprocess. 1051s Preparing to unpack .../062-python3-ptyprocess_0.7.0-5_all.deb ... 1051s Unpacking python3-ptyprocess (0.7.0-5) ... 1051s Selecting previously unselected package python3-tornado. 1051s Preparing to unpack .../063-python3-tornado_6.4.0-1build1_amd64.deb ... 1051s Unpacking python3-tornado (6.4.0-1build1) ... 1051s Selecting previously unselected package python3-terminado. 1051s Preparing to unpack .../064-python3-terminado_0.17.1-1_all.deb ... 1051s Unpacking python3-terminado (0.17.1-1) ... 1051s Selecting previously unselected package python3-argon2. 1051s Preparing to unpack .../065-python3-argon2_21.1.0-2build1_amd64.deb ... 1051s Unpacking python3-argon2 (21.1.0-2build1) ... 1051s Selecting previously unselected package python3-comm. 1051s Preparing to unpack .../066-python3-comm_0.2.1-1_all.deb ... 1051s Unpacking python3-comm (0.2.1-1) ... 1051s Selecting previously unselected package python3-bytecode. 1051s Preparing to unpack .../067-python3-bytecode_0.15.1-3_all.deb ... 1051s Unpacking python3-bytecode (0.15.1-3) ... 1051s Selecting previously unselected package python3-coverage. 1051s Preparing to unpack .../068-python3-coverage_7.4.4+dfsg1-0ubuntu2_amd64.deb ... 1051s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 1051s Selecting previously unselected package python3-pydevd. 1051s Preparing to unpack .../069-python3-pydevd_2.10.0+ds-10ubuntu1_amd64.deb ... 1051s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 1051s Selecting previously unselected package python3-debugpy. 1051s Preparing to unpack .../070-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 1051s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 
1051s Selecting previously unselected package python3-decorator. 1051s Preparing to unpack .../071-python3-decorator_5.1.1-5_all.deb ... 1051s Unpacking python3-decorator (5.1.1-5) ... 1051s Selecting previously unselected package python3-parso. 1051s Preparing to unpack .../072-python3-parso_0.8.3-1_all.deb ... 1051s Unpacking python3-parso (0.8.3-1) ... 1051s Selecting previously unselected package python3-typeshed. 1051s Preparing to unpack .../073-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 1051s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 1052s Selecting previously unselected package python3-jedi. 1052s Preparing to unpack .../074-python3-jedi_0.19.1+ds1-1_all.deb ... 1052s Unpacking python3-jedi (0.19.1+ds1-1) ... 1052s Selecting previously unselected package python3-matplotlib-inline. 1052s Preparing to unpack .../075-python3-matplotlib-inline_0.1.6-2_all.deb ... 1052s Unpacking python3-matplotlib-inline (0.1.6-2) ... 1052s Selecting previously unselected package python3-pexpect. 1052s Preparing to unpack .../076-python3-pexpect_4.9-2_all.deb ... 1052s Unpacking python3-pexpect (4.9-2) ... 1052s Selecting previously unselected package python3-wcwidth. 1052s Preparing to unpack .../077-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 1052s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 1052s Selecting previously unselected package python3-prompt-toolkit. 1052s Preparing to unpack .../078-python3-prompt-toolkit_3.0.43-1_all.deb ... 1052s Unpacking python3-prompt-toolkit (3.0.43-1) ... 1052s Selecting previously unselected package python3-asttokens. 1052s Preparing to unpack .../079-python3-asttokens_2.4.1-1_all.deb ... 1052s Unpacking python3-asttokens (2.4.1-1) ... 1052s Selecting previously unselected package python3-executing. 1052s Preparing to unpack .../080-python3-executing_2.0.1-0.1_all.deb ... 1052s Unpacking python3-executing (2.0.1-0.1) ... 1052s Selecting previously unselected package python3-pure-eval. 1052s Preparing to unpack .../081-python3-pure-eval_0.2.2-2_all.deb ... 1052s Unpacking python3-pure-eval (0.2.2-2) ... 1052s Selecting previously unselected package python3-stack-data. 1053s Preparing to unpack .../082-python3-stack-data_0.6.3-1_all.deb ... 1053s Unpacking python3-stack-data (0.6.3-1) ... 1053s Selecting previously unselected package python3-ipython. 1053s Preparing to unpack .../083-python3-ipython_8.20.0-1_all.deb ... 1053s Unpacking python3-ipython (8.20.0-1) ... 1053s Selecting previously unselected package python3-dateutil. 1053s Preparing to unpack .../084-python3-dateutil_2.8.2-3ubuntu1_all.deb ... 1053s Unpacking python3-dateutil (2.8.2-3ubuntu1) ... 1053s Selecting previously unselected package python3-entrypoints. 1053s Preparing to unpack .../085-python3-entrypoints_0.4-2_all.deb ... 1053s Unpacking python3-entrypoints (0.4-2) ... 1053s Selecting previously unselected package python3-nest-asyncio. 1053s Preparing to unpack .../086-python3-nest-asyncio_1.5.4-1_all.deb ... 1053s Unpacking python3-nest-asyncio (1.5.4-1) ... 1053s Selecting previously unselected package python3-py. 1053s Preparing to unpack .../087-python3-py_1.11.0-2_all.deb ... 1053s Unpacking python3-py (1.11.0-2) ... 1053s Selecting previously unselected package libnorm1t64:amd64. 1053s Preparing to unpack .../088-libnorm1t64_1.5.9+dfsg-3.1build1_amd64.deb ... 1053s Unpacking libnorm1t64:amd64 (1.5.9+dfsg-3.1build1) ... 1053s Selecting previously unselected package libpgm-5.3-0t64:amd64. 
1053s Preparing to unpack .../089-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_amd64.deb ... 1053s Unpacking libpgm-5.3-0t64:amd64 (5.3.128~dfsg-2.1build1) ... 1053s Selecting previously unselected package libsodium23:amd64. 1053s Preparing to unpack .../090-libsodium23_1.0.18-1build3_amd64.deb ... 1053s Unpacking libsodium23:amd64 (1.0.18-1build3) ... 1053s Selecting previously unselected package libzmq5:amd64. 1053s Preparing to unpack .../091-libzmq5_4.3.5-1build2_amd64.deb ... 1053s Unpacking libzmq5:amd64 (4.3.5-1build2) ... 1053s Selecting previously unselected package python3-zmq. 1053s Preparing to unpack .../092-python3-zmq_24.0.1-5build1_amd64.deb ... 1053s Unpacking python3-zmq (24.0.1-5build1) ... 1053s Selecting previously unselected package python3-jupyter-client. 1053s Preparing to unpack .../093-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ... 1053s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ... 1053s Selecting previously unselected package python3-packaging. 1053s Preparing to unpack .../094-python3-packaging_24.0-1_all.deb ... 1053s Unpacking python3-packaging (24.0-1) ... 1053s Selecting previously unselected package python3-psutil. 1053s Preparing to unpack .../095-python3-psutil_5.9.8-2build2_amd64.deb ... 1053s Unpacking python3-psutil (5.9.8-2build2) ... 1053s Selecting previously unselected package python3-ipykernel. 1053s Preparing to unpack .../096-python3-ipykernel_6.29.3-1_all.deb ... 1053s Unpacking python3-ipykernel (6.29.3-1) ... 1053s Selecting previously unselected package python3-ipython-genutils. 1053s Preparing to unpack .../097-python3-ipython-genutils_0.2.0-6_all.deb ... 1053s Unpacking python3-ipython-genutils (0.2.0-6) ... 1053s Selecting previously unselected package python3-webencodings. 1053s Preparing to unpack .../098-python3-webencodings_0.5.1-5_all.deb ... 1053s Unpacking python3-webencodings (0.5.1-5) ... 1053s Selecting previously unselected package python3-html5lib. 1053s Preparing to unpack .../099-python3-html5lib_1.1-6_all.deb ... 1053s Unpacking python3-html5lib (1.1-6) ... 1053s Selecting previously unselected package python3-bleach. 1053s Preparing to unpack .../100-python3-bleach_6.1.0-2_all.deb ... 1053s Unpacking python3-bleach (6.1.0-2) ... 1053s Selecting previously unselected package python3-soupsieve. 1053s Preparing to unpack .../101-python3-soupsieve_2.5-1_all.deb ... 1053s Unpacking python3-soupsieve (2.5-1) ... 1053s Selecting previously unselected package python3-bs4. 1053s Preparing to unpack .../102-python3-bs4_4.12.3-1_all.deb ... 1053s Unpacking python3-bs4 (4.12.3-1) ... 1053s Selecting previously unselected package python3-defusedxml. 1053s Preparing to unpack .../103-python3-defusedxml_0.7.1-2_all.deb ... 1053s Unpacking python3-defusedxml (0.7.1-2) ... 1053s Selecting previously unselected package python3-jupyterlab-pygments. 1053s Preparing to unpack .../104-python3-jupyterlab-pygments_0.2.2-3_all.deb ... 1053s Unpacking python3-jupyterlab-pygments (0.2.2-3) ... 1053s Selecting previously unselected package libxslt1.1:amd64. 1053s Preparing to unpack .../105-libxslt1.1_1.1.39-0exp1build1_amd64.deb ... 1053s Unpacking libxslt1.1:amd64 (1.1.39-0exp1build1) ... 1053s Selecting previously unselected package python3-lxml:amd64. 1053s Preparing to unpack .../106-python3-lxml_5.2.1-1_amd64.deb ... 1053s Unpacking python3-lxml:amd64 (5.2.1-1) ... 1053s Selecting previously unselected package python3-fastjsonschema. 1053s Preparing to unpack .../107-python3-fastjsonschema_2.19.0-1_all.deb ... 
1053s Unpacking python3-fastjsonschema (2.19.0-1) ... 1053s Selecting previously unselected package python3-nbformat. 1053s Preparing to unpack .../108-python3-nbformat_5.9.1-1_all.deb ... 1053s Unpacking python3-nbformat (5.9.1-1) ... 1053s Selecting previously unselected package python3-nbclient. 1053s Preparing to unpack .../109-python3-nbclient_0.8.0-1_all.deb ... 1053s Unpacking python3-nbclient (0.8.0-1) ... 1054s Selecting previously unselected package python3-pandocfilters. 1054s Preparing to unpack .../110-python3-pandocfilters_1.5.1-1_all.deb ... 1054s Unpacking python3-pandocfilters (1.5.1-1) ... 1054s Selecting previously unselected package python-tinycss2-common. 1054s Preparing to unpack .../111-python-tinycss2-common_1.2.1-2_all.deb ... 1054s Unpacking python-tinycss2-common (1.2.1-2) ... 1054s Selecting previously unselected package python3-tinycss2. 1054s Preparing to unpack .../112-python3-tinycss2_1.2.1-2_all.deb ... 1054s Unpacking python3-tinycss2 (1.2.1-2) ... 1054s Selecting previously unselected package python3-lxml-html-clean. 1054s Preparing to unpack .../113-python3-lxml-html-clean_0.1.1-1_all.deb ... 1054s Unpacking python3-lxml-html-clean (0.1.1-1) ... 1054s Selecting previously unselected package python3-nbconvert. 1054s Preparing to unpack .../114-python3-nbconvert_6.5.3-5_all.deb ... 1054s Unpacking python3-nbconvert (6.5.3-5) ... 1054s Selecting previously unselected package python3-prometheus-client. 1054s Preparing to unpack .../115-python3-prometheus-client_0.19.0+ds1-1_all.deb ... 1054s Unpacking python3-prometheus-client (0.19.0+ds1-1) ... 1054s Selecting previously unselected package python3-send2trash. 1054s Preparing to unpack .../116-python3-send2trash_1.8.2-1_all.deb ... 1054s Unpacking python3-send2trash (1.8.2-1) ... 1054s Selecting previously unselected package python3-notebook. 1054s Preparing to unpack .../117-python3-notebook_6.4.12-2.2ubuntu1_all.deb ... 1054s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ... 1054s Selecting previously unselected package jupyter-notebook. 1054s Preparing to unpack .../118-jupyter-notebook_6.4.12-2.2ubuntu1_all.deb ... 1054s Unpacking jupyter-notebook (6.4.12-2.2ubuntu1) ... 1054s Selecting previously unselected package libjs-sphinxdoc. 1054s Preparing to unpack .../119-libjs-sphinxdoc_7.2.6-6_all.deb ... 1054s Unpacking libjs-sphinxdoc (7.2.6-6) ... 1054s Selecting previously unselected package sphinx-rtd-theme-common. 1054s Preparing to unpack .../120-sphinx-rtd-theme-common_2.0.0+dfsg-1_all.deb ... 1054s Unpacking sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 1054s Selecting previously unselected package python-notebook-doc. 1054s Preparing to unpack .../121-python-notebook-doc_6.4.12-2.2ubuntu1_all.deb ... 1054s Unpacking python-notebook-doc (6.4.12-2.2ubuntu1) ... 1054s Setting up python3-entrypoints (0.4-2) ... 1054s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 1054s Setting up python3-tornado (6.4.0-1build1) ... 1055s Setting up libnorm1t64:amd64 (1.5.9+dfsg-3.1build1) ... 1055s Setting up python3-pure-eval (0.2.2-2) ... 1055s Setting up python3-send2trash (1.8.2-1) ... 1055s Setting up fonts-lato (2.015-1) ... 1055s Setting up fonts-mathjax (2.7.9+dfsg-1) ... 1055s Setting up libsodium23:amd64 (1.0.18-1build3) ... 1055s Setting up libjs-mathjax (2.7.9+dfsg-1) ... 1055s Setting up python3-py (1.11.0-2) ... 1055s Setting up libdebuginfod-common (0.190-1.1build4) ... 1055s Setting up libjs-requirejs-text (2.0.12-1.1) ... 1055s Setting up python3-parso (0.8.3-1) ... 
1055s Setting up python3-defusedxml (0.7.1-2) ... 1055s Setting up python3-ipython-genutils (0.2.0-6) ... 1055s Setting up python3-asttokens (2.4.1-1) ... 1056s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 1056s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 1056s Setting up libjs-moment (2.29.4+ds-1) ... 1056s Setting up python3-pandocfilters (1.5.1-1) ... 1056s Setting up libgomp1:amd64 (14-20240412-0ubuntu1) ... 1056s Setting up libjs-requirejs (2.3.6+ds+~2.1.34-2) ... 1056s Setting up libjs-es6-promise (4.2.8-12) ... 1056s Setting up libjs-text-encoding (0.7.0-5) ... 1056s Setting up python3-webencodings (0.5.1-5) ... 1056s Setting up python3-platformdirs (4.2.0-1) ... 1056s Setting up python3-psutil (5.9.8-2build2) ... 1056s Setting up libsource-highlight-common (3.1.9-4.3build1) ... 1056s Setting up python3-jupyterlab-pygments (0.2.2-3) ... 1056s Setting up libpython3.12t64:amd64 (3.12.3-1) ... 1056s Setting up libpgm-5.3-0t64:amd64 (5.3.128~dfsg-2.1build1) ... 1057s Setting up python3-decorator (5.1.1-5) ... 1057s Setting up python3-packaging (24.0-1) ... 1057s Setting up gcc-13-base:amd64 (13.2.0-23ubuntu4) ... 1057s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 1057s Setting up node-jed (1.1.1-4) ... 1057s Setting up python3-typeshed (0.0~git20231111.6764465-3) ... 1057s Setting up python3-executing (2.0.1-0.1) ... 1057s Setting up libjs-xterm (5.3.0-2) ... 1057s Setting up python3-nest-asyncio (1.5.4-1) ... 1057s Setting up libquadmath0:amd64 (14-20240412-0ubuntu1) ... 1057s Setting up python3-bytecode (0.15.1-3) ... 1057s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ... 1057s Setting up libmpc3:amd64 (1.3.1-1build1) ... 1057s Setting up libatomic1:amd64 (14-20240412-0ubuntu1) ... 1057s Setting up libjs-jed (1.1.1-4) ... 1057s Setting up libipt2 (2.0.6-1build1) ... 1057s Setting up python3-html5lib (1.1-6) ... 1057s Setting up libbabeltrace1:amd64 (1.5.11-3build3) ... 1057s Setting up libubsan1:amd64 (14-20240412-0ubuntu1) ... 1057s Setting up python3-fastjsonschema (2.19.0-1) ... 1058s Setting up libhwasan0:amd64 (14-20240412-0ubuntu1) ... 1058s Setting up python3-traitlets (5.14.3-1) ... 1058s Setting up libasan8:amd64 (14-20240412-0ubuntu1) ... 1058s Setting up python-tinycss2-common (1.2.1-2) ... 1058s Setting up libxslt1.1:amd64 (1.1.39-0exp1build1) ... 1058s Setting up python3-argon2 (21.1.0-2build1) ... 1058s Setting up python3-dateutil (2.8.2-3ubuntu1) ... 1058s Setting up libtsan2:amd64 (14-20240412-0ubuntu1) ... 1058s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 1058s Setting up libisl23:amd64 (0.26-3build1) ... 1058s Setting up python3-stack-data (0.6.3-1) ... 1058s Setting up python3-soupsieve (2.5-1) ... 1058s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 1058s Setting up sphinx-rtd-theme-common (2.0.0+dfsg-1) ... 1058s Setting up libcc1-0:amd64 (14-20240412-0ubuntu1) ... 1058s Setting up python3-jupyter-core (5.3.2-1ubuntu1) ... 1058s Setting up liblsan0:amd64 (14-20240412-0ubuntu1) ... 1058s Setting up libjs-bootstrap (3.4.1+dfsg-3) ... 1058s Setting up libitm1:amd64 (14-20240412-0ubuntu1) ... 1058s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 1058s Setting up python3-ptyprocess (0.7.0-5) ... 1058s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ... 1058s Setting up python3-prompt-toolkit (3.0.43-1) ... 1059s Setting up libdebuginfod1t64:amd64 (0.190-1.1build4) ... 1059s Setting up python3-tinycss2 (1.2.1-2) ... 1059s Setting up libzmq5:amd64 (4.3.5-1build2) ... 
1059s Setting up python3-jedi (0.19.1+ds1-1) ... 1059s Setting up cpp-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1059s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ... 1059s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 1059s Setting up libsource-highlight4t64:amd64 (3.1.9-4.3build1) ... 1059s Setting up python3-nbformat (5.9.1-1) ... 1059s Setting up python3-bs4 (4.12.3-1) ... 1059s Setting up python3-bleach (6.1.0-2) ... 1060s Setting up python3-matplotlib-inline (0.1.6-2) ... 1060s Setting up python3-comm (0.2.1-1) ... 1060s Setting up python3-prometheus-client (0.19.0+ds1-1) ... 1060s Setting up gdb (15.0.50.20240403-0ubuntu1) ... 1060s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ... 1060s Setting up python3-pexpect (4.9-2) ... 1060s Setting up python3-zmq (24.0.1-5build1) ... 1060s Setting up libjs-sphinxdoc (7.2.6-6) ... 1060s Setting up python3-terminado (0.17.1-1) ... 1060s Setting up libgcc-13-dev:amd64 (13.2.0-23ubuntu4) ... 1060s Setting up python3-lxml:amd64 (5.2.1-1) ... 1061s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ... 1061s Setting up jupyter-core (5.3.2-1ubuntu1) ... 1061s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ... 1061s Setting up libstdc++-13-dev:amd64 (13.2.0-23ubuntu4) ... 1061s Setting up cpp-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1061s Setting up cpp-13 (13.2.0-23ubuntu4) ... 1061s Setting up gcc-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1061s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ... 1061s Setting up python-notebook-doc (6.4.12-2.2ubuntu1) ... 1061s Setting up python3-nbclient (0.8.0-1) ... 1062s Setting up python3-ipython (8.20.0-1) ... 1062s Setting up python3-ipykernel (6.29.3-1) ... 1062s Setting up gcc-13 (13.2.0-23ubuntu4) ... 1062s Setting up python3-lxml-html-clean (0.1.1-1) ... 1062s Setting up python3-nbconvert (6.5.3-5) ... 1063s Setting up cpp (4:13.2.0-7ubuntu1) ... 1063s Setting up g++-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1063s Setting up gcc-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1063s Setting up python3-notebook (6.4.12-2.2ubuntu1) ... 1063s Setting up gcc (4:13.2.0-7ubuntu1) ... 1063s Setting up g++-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1063s Setting up g++-13 (13.2.0-23ubuntu4) ... 1063s Setting up jupyter-notebook (6.4.12-2.2ubuntu1) ... 1063s Setting up g++ (4:13.2.0-7ubuntu1) ... 1063s update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode 1063s Setting up build-essential (12.10ubuntu1) ... 1063s Processing triggers for man-db (2.12.0-4build2) ... 1064s Processing triggers for libc-bin (2.39-0ubuntu8) ... 1064s Reading package lists... 1065s Building dependency tree... 1065s Reading state information... 1065s Starting pkgProblemResolver with broken count: 0 1065s Starting 2 pkgProblemResolver with broken count: 0 1065s Done 1065s The following NEW packages will be installed: 1065s autopkgtest-satdep 1065s 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 1065s Need to get 0 B/696 B of archives. 1065s After this operation, 0 B of additional disk space will be used. 1065s Get:1 /tmp/autopkgtest.FMSSaJ/4-autopkgtest-satdep.deb autopkgtest-satdep amd64 0 [696 B] 1066s Selecting previously unselected package autopkgtest-satdep. 1066s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 
60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 91697 files and directories currently installed.) 1066s Preparing to unpack .../4-autopkgtest-satdep.deb ... 1066s Unpacking autopkgtest-satdep (0) ... 1066s Setting up autopkgtest-satdep (0) ... 1067s (Reading database ... 91697 files and directories currently installed.) 1067s Removing autopkgtest-satdep (0) ... 1068s autopkgtest [23:27:16]: test command1: find /usr/lib/python3/dist-packages/notebook -xtype l >&2 1068s autopkgtest [23:27:16]: test command1: [----------------------- 1069s autopkgtest [23:27:17]: test command1: -----------------------] 1069s autopkgtest [23:27:17]: test command1: - - - - - - - - - - results - - - - - - - - - - 1069s command1 PASS (superficial) 1069s autopkgtest [23:27:17]: test autodep8-python3: preparing testbed 1457s autopkgtest [23:33:45]: testbed dpkg architecture: amd64 1457s autopkgtest [23:33:45]: testbed apt version: 2.7.14build2 1457s autopkgtest [23:33:45]: test architecture: i386 1457s autopkgtest [23:33:45]: @@@@@@@@@@@@@@@@@@@@ test bed setup 1458s Get:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease [73.9 kB] 1458s Get:2 http://ftpmaster.internal/ubuntu oracular-proposed/main Sources [128 kB] 1458s Get:3 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse Sources [17.6 kB] 1458s Get:4 http://ftpmaster.internal/ubuntu oracular-proposed/restricted Sources [1964 B] 1458s Get:5 http://ftpmaster.internal/ubuntu oracular-proposed/universe Sources [1145 kB] 1458s Get:6 http://ftpmaster.internal/ubuntu oracular-proposed/main i386 Packages [171 kB] 1458s Get:7 http://ftpmaster.internal/ubuntu oracular-proposed/main amd64 Packages [215 kB] 1458s Get:8 http://ftpmaster.internal/ubuntu oracular-proposed/restricted amd64 Packages [7700 B] 1458s Get:9 http://ftpmaster.internal/ubuntu oracular-proposed/universe amd64 Packages [1033 kB] 1458s Get:10 http://ftpmaster.internal/ubuntu oracular-proposed/universe i386 Packages [523 kB] 1458s Get:11 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse amd64 Packages [53.1 kB] 1458s Get:12 http://ftpmaster.internal/ubuntu oracular-proposed/multiverse i386 Packages [19.0 kB] 1458s Fetched 3388 kB in 1s (5766 kB/s) 1458s Reading package lists... 1460s Reading package lists... 1460s Building dependency tree... 1460s Reading state information... 1460s Calculating upgrade... 1460s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1461s Reading package lists... 1461s Building dependency tree... 1461s Reading state information... 1461s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1461s Hit:1 http://ftpmaster.internal/ubuntu oracular-proposed InRelease 1461s Hit:2 http://ftpmaster.internal/ubuntu oracular InRelease 1461s Hit:3 http://ftpmaster.internal/ubuntu oracular-updates InRelease 1461s Hit:4 http://ftpmaster.internal/ubuntu oracular-security InRelease 1462s Reading package lists... 1463s Reading package lists... 1463s Building dependency tree... 1463s Reading state information... 1463s Calculating upgrade... 1463s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1463s Reading package lists... 1463s Building dependency tree... 1463s Reading state information... 1464s 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
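The command1 check recorded above is a superficial sanity test: it runs find with -xtype l over the installed notebook package and passes as long as the command prints nothing, since a dangling symlink would be written to stderr via the >&2 redirection. Below is a minimal Python sketch of an equivalent check; the directory path is the one named in the test command, while the variable names and structure are illustrative rather than taken from the harness.

#!/usr/bin/env python3
# Report symlinks under the installed notebook package whose targets no longer
# resolve, roughly what `find /usr/lib/python3/dist-packages/notebook -xtype l >&2` does.
import os
import sys

root = "/usr/lib/python3/dist-packages/notebook"  # path used by the test above

for dirpath, dirnames, filenames in os.walk(root):
    for name in dirnames + filenames:
        entry = os.path.join(dirpath, name)
        # os.path.islink() is true for any symlink; os.path.exists() follows the
        # link, so the combination flags only links with missing targets.
        if os.path.islink(entry) and not os.path.exists(entry):
            print(entry, file=sys.stderr)  # mirror the test's >&2 redirection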
1467s Note, using file '/tmp/autopkgtest.FMSSaJ/5-autopkgtest-satdep.dsc' to get the build dependencies 1467s Reading package lists... 1467s Building dependency tree... 1467s Reading state information... 1468s Starting pkgProblemResolver with broken count: 0 1468s Starting 2 pkgProblemResolver with broken count: 0 1468s Done 1468s The following NEW packages will be installed: 1468s build-essential cpp cpp-13 cpp-13-x86-64-linux-gnu cpp-x86-64-linux-gnu 1468s fonts-font-awesome fonts-glyphicons-halflings fonts-mathjax g++ g++-13 1468s g++-13-x86-64-linux-gnu g++-x86-64-linux-gnu gcc gcc-13 gcc-13-base 1468s gcc-13-x86-64-linux-gnu gcc-x86-64-linux-gnu gdb libasan8 libatomic1 1468s libbabeltrace1 libcc1-0 libdebuginfod-common libdebuginfod1t64 libgcc-13-dev 1468s libgomp1 libhwasan0 libipt2 libisl23 libitm1 libjs-backbone libjs-bootstrap 1468s libjs-bootstrap-tour libjs-codemirror libjs-es6-promise libjs-jed 1468s libjs-jquery libjs-jquery-typeahead libjs-jquery-ui libjs-marked 1468s libjs-mathjax libjs-moment libjs-requirejs libjs-requirejs-text 1468s libjs-text-encoding libjs-underscore libjs-xterm liblsan0 libmpc3 1468s libnorm1t64 libpgm-5.3-0t64 libpython3.12t64 libquadmath0 libsodium23 1468s libsource-highlight-common libsource-highlight4t64 libstdc++-13-dev libtsan2 1468s libubsan1 libxslt1.1 libzmq5 node-jed python-tinycss2-common python3-all 1468s python3-argon2 python3-asttokens python3-bleach python3-bs4 python3-bytecode 1468s python3-comm python3-coverage python3-dateutil python3-debugpy 1468s python3-decorator python3-defusedxml python3-entrypoints python3-executing 1468s python3-fastjsonschema python3-html5lib python3-ipykernel python3-ipython 1468s python3-ipython-genutils python3-jedi python3-jupyter-client 1468s python3-jupyter-core python3-jupyterlab-pygments python3-lxml 1468s python3-lxml-html-clean python3-matplotlib-inline python3-nbclient 1468s python3-nbconvert python3-nbformat python3-nest-asyncio python3-notebook 1468s python3-packaging python3-pandocfilters python3-parso python3-pexpect 1468s python3-platformdirs python3-prometheus-client python3-prompt-toolkit 1468s python3-psutil python3-ptyprocess python3-pure-eval python3-py 1468s python3-pydevd python3-send2trash python3-soupsieve python3-stack-data 1468s python3-terminado python3-tinycss2 python3-tornado python3-traitlets 1468s python3-typeshed python3-wcwidth python3-webencodings python3-zmq 1468s 0 upgraded, 117 newly installed, 0 to remove and 0 not upgraded. 1468s Need to get 90.3 MB of archives. 1468s After this operation, 377 MB of additional disk space will be used. 
1468s Get:1 http://ftpmaster.internal/ubuntu oracular/main amd64 libdebuginfod-common all 0.190-1.1build4 [14.2 kB] 1469s Get:2 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13-base amd64 13.2.0-23ubuntu4 [49.0 kB] 1469s Get:3 http://ftpmaster.internal/ubuntu oracular/main amd64 libisl23 amd64 0.26-3build1 [680 kB] 1469s Get:4 http://ftpmaster.internal/ubuntu oracular/main amd64 libmpc3 amd64 1.3.1-1build1 [54.5 kB] 1469s Get:5 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [11.2 MB] 1469s Get:6 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-13 amd64 13.2.0-23ubuntu4 [1032 B] 1469s Get:7 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [5326 B] 1469s Get:8 http://ftpmaster.internal/ubuntu oracular/main amd64 cpp amd64 4:13.2.0-7ubuntu1 [22.4 kB] 1469s Get:9 http://ftpmaster.internal/ubuntu oracular/main amd64 libcc1-0 amd64 14-20240412-0ubuntu1 [47.7 kB] 1469s Get:10 http://ftpmaster.internal/ubuntu oracular/main amd64 libgomp1 amd64 14-20240412-0ubuntu1 [147 kB] 1469s Get:11 http://ftpmaster.internal/ubuntu oracular/main amd64 libitm1 amd64 14-20240412-0ubuntu1 [28.9 kB] 1469s Get:12 http://ftpmaster.internal/ubuntu oracular/main amd64 libatomic1 amd64 14-20240412-0ubuntu1 [10.4 kB] 1469s Get:13 http://ftpmaster.internal/ubuntu oracular/main amd64 libasan8 amd64 14-20240412-0ubuntu1 [3024 kB] 1469s Get:14 http://ftpmaster.internal/ubuntu oracular/main amd64 liblsan0 amd64 14-20240412-0ubuntu1 [1313 kB] 1469s Get:15 http://ftpmaster.internal/ubuntu oracular/main amd64 libtsan2 amd64 14-20240412-0ubuntu1 [2736 kB] 1469s Get:16 http://ftpmaster.internal/ubuntu oracular/main amd64 libubsan1 amd64 14-20240412-0ubuntu1 [1175 kB] 1469s Get:17 http://ftpmaster.internal/ubuntu oracular/main amd64 libhwasan0 amd64 14-20240412-0ubuntu1 [1632 kB] 1469s Get:18 http://ftpmaster.internal/ubuntu oracular/main amd64 libquadmath0 amd64 14-20240412-0ubuntu1 [153 kB] 1469s Get:19 http://ftpmaster.internal/ubuntu oracular/main amd64 libgcc-13-dev amd64 13.2.0-23ubuntu4 [2688 kB] 1469s Get:20 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [21.9 MB] 1469s Get:21 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-13 amd64 13.2.0-23ubuntu4 [482 kB] 1469s Get:22 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [1212 B] 1469s Get:23 http://ftpmaster.internal/ubuntu oracular/main amd64 gcc amd64 4:13.2.0-7ubuntu1 [5018 B] 1469s Get:24 http://ftpmaster.internal/ubuntu oracular/main amd64 libstdc++-13-dev amd64 13.2.0-23ubuntu4 [2399 kB] 1469s Get:25 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-13-x86-64-linux-gnu amd64 13.2.0-23ubuntu4 [12.5 MB] 1469s Get:26 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-13 amd64 13.2.0-23ubuntu4 [14.5 kB] 1469s Get:27 http://ftpmaster.internal/ubuntu oracular/main amd64 g++-x86-64-linux-gnu amd64 4:13.2.0-7ubuntu1 [964 B] 1469s Get:28 http://ftpmaster.internal/ubuntu oracular/main amd64 g++ amd64 4:13.2.0-7ubuntu1 [1100 B] 1469s Get:29 http://ftpmaster.internal/ubuntu oracular/main amd64 build-essential amd64 12.10ubuntu1 [4928 B] 1469s Get:30 http://ftpmaster.internal/ubuntu oracular/main amd64 fonts-font-awesome all 5.0.10+really4.7.0~dfsg-4.1 [516 kB] 1469s Get:31 http://ftpmaster.internal/ubuntu oracular/universe amd64 fonts-glyphicons-halflings all 1.009~3.4.1+dfsg-3 [118 kB] 1469s Get:32 http://ftpmaster.internal/ubuntu 
oracular/main amd64 fonts-mathjax all 2.7.9+dfsg-1 [2208 kB] 1469s Get:33 http://ftpmaster.internal/ubuntu oracular/main amd64 libbabeltrace1 amd64 1.5.11-3build3 [164 kB] 1469s Get:34 http://ftpmaster.internal/ubuntu oracular/main amd64 libdebuginfod1t64 amd64 0.190-1.1build4 [17.1 kB] 1469s Get:35 http://ftpmaster.internal/ubuntu oracular/main amd64 libipt2 amd64 2.0.6-1build1 [45.7 kB] 1469s Get:36 http://ftpmaster.internal/ubuntu oracular/main amd64 libpython3.12t64 amd64 3.12.3-1 [2339 kB] 1469s Get:37 http://ftpmaster.internal/ubuntu oracular/main amd64 libsource-highlight-common all 3.1.9-4.3build1 [64.2 kB] 1469s Get:38 http://ftpmaster.internal/ubuntu oracular/main amd64 libsource-highlight4t64 amd64 3.1.9-4.3build1 [258 kB] 1469s Get:39 http://ftpmaster.internal/ubuntu oracular/main amd64 gdb amd64 15.0.50.20240403-0ubuntu1 [4010 kB] 1469s Get:40 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [118 kB] 1469s Get:41 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-backbone all 1.4.1~dfsg+~1.4.15-3 [185 kB] 1469s Get:42 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-bootstrap all 3.4.1+dfsg-3 [129 kB] 1469s Get:43 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [328 kB] 1469s Get:44 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-bootstrap-tour all 0.12.0+dfsg-5 [21.4 kB] 1469s Get:45 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-es6-promise all 4.2.8-12 [14.1 kB] 1469s Get:46 http://ftpmaster.internal/ubuntu oracular/universe amd64 node-jed all 1.1.1-4 [15.2 kB] 1469s Get:47 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jed all 1.1.1-4 [2584 B] 1469s Get:48 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jquery-typeahead all 2.11.0+dfsg1-3 [48.9 kB] 1469s Get:49 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-jquery-ui all 1.13.2+dfsg-1 [252 kB] 1469s Get:50 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-moment all 2.29.4+ds-1 [147 kB] 1469s Get:51 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-text-encoding all 0.7.0-5 [140 kB] 1469s Get:52 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-xterm all 5.3.0-2 [476 kB] 1469s Get:53 http://ftpmaster.internal/ubuntu oracular/universe amd64 libnorm1t64 amd64 1.5.9+dfsg-3.1build1 [154 kB] 1469s Get:54 http://ftpmaster.internal/ubuntu oracular/universe amd64 libpgm-5.3-0t64 amd64 5.3.128~dfsg-2.1build1 [167 kB] 1469s Get:55 http://ftpmaster.internal/ubuntu oracular/main amd64 libsodium23 amd64 1.0.18-1build3 [161 kB] 1469s Get:56 http://ftpmaster.internal/ubuntu oracular/main amd64 libxslt1.1 amd64 1.1.39-0exp1build1 [167 kB] 1469s Get:57 http://ftpmaster.internal/ubuntu oracular/universe amd64 libzmq5 amd64 4.3.5-1build2 [260 kB] 1469s Get:58 http://ftpmaster.internal/ubuntu oracular/universe amd64 python-tinycss2-common all 1.2.1-2 [33.9 kB] 1469s Get:59 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-all amd64 3.12.3-0ubuntu1 [888 B] 1469s Get:60 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-argon2 amd64 21.1.0-2build1 [21.0 kB] 1469s Get:61 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-asttokens all 2.4.1-1 [20.9 kB] 1469s Get:62 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-webencodings all 0.5.1-5 [11.5 kB] 1469s Get:63 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-html5lib all 1.1-6 [88.8 kB] 1469s Get:64 
http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-bleach all 6.1.0-2 [49.6 kB] 1469s Get:65 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-soupsieve all 2.5-1 [33.0 kB] 1469s Get:66 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-bs4 all 4.12.3-1 [109 kB] 1469s Get:67 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-bytecode all 0.15.1-3 [44.7 kB] 1469s Get:68 http://ftpmaster.internal/ubuntu oracular-proposed/universe amd64 python3-traitlets all 5.14.3-1 [71.3 kB] 1469s Get:69 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-comm all 0.2.1-1 [7016 B] 1469s Get:70 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-coverage amd64 7.4.4+dfsg1-0ubuntu2 [147 kB] 1469s Get:71 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-dateutil all 2.8.2-3ubuntu1 [79.4 kB] 1469s Get:72 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pydevd amd64 2.10.0+ds-10ubuntu1 [637 kB] 1469s Get:73 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-debugpy all 1.8.0+ds-4ubuntu4 [67.6 kB] 1469s Get:74 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-decorator all 5.1.1-5 [10.1 kB] 1469s Get:75 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-defusedxml all 0.7.1-2 [42.0 kB] 1469s Get:76 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-entrypoints all 0.4-2 [7146 B] 1469s Get:77 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-executing all 2.0.1-0.1 [23.3 kB] 1469s Get:78 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-fastjsonschema all 2.19.0-1 [19.6 kB] 1469s Get:79 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-parso all 0.8.3-1 [67.2 kB] 1469s Get:80 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-typeshed all 0.0~git20231111.6764465-3 [1274 kB] 1469s Get:81 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jedi all 0.19.1+ds1-1 [693 kB] 1469s Get:82 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-matplotlib-inline all 0.1.6-2 [8784 B] 1469s Get:83 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-ptyprocess all 0.7.0-5 [15.1 kB] 1469s Get:84 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-pexpect all 4.9-2 [48.1 kB] 1469s Get:85 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-wcwidth all 0.2.5+dfsg1-1.1ubuntu1 [22.5 kB] 1469s Get:86 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-prompt-toolkit all 3.0.43-1 [256 kB] 1469s Get:87 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pure-eval all 0.2.2-2 [11.1 kB] 1469s Get:88 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-stack-data all 0.6.3-1 [22.0 kB] 1469s Get:89 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipython all 8.20.0-1 [561 kB] 1469s Get:90 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-platformdirs all 4.2.0-1 [16.1 kB] 1469s Get:91 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyter-core all 5.3.2-1ubuntu1 [25.5 kB] 1469s Get:92 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nest-asyncio all 1.5.4-1 [6256 B] 1469s Get:93 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-tornado amd64 6.4.0-1build1 [297 kB] 1469s Get:94 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-py all 1.11.0-2 [72.7 kB] 1469s Get:95 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-zmq amd64 24.0.1-5build1 [286 kB] 
1469s Get:96 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyter-client all 7.4.9-2ubuntu1 [90.5 kB] 1469s Get:97 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-packaging all 24.0-1 [41.1 kB] 1469s Get:98 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-psutil amd64 5.9.8-2build2 [195 kB] 1469s Get:99 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipykernel all 6.29.3-1 [82.4 kB] 1469s Get:100 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-ipython-genutils all 0.2.0-6 [22.0 kB] 1469s Get:101 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-jupyterlab-pygments all 0.2.2-3 [6054 B] 1469s Get:102 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-lxml amd64 5.2.1-1 [1243 kB] 1469s Get:103 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-lxml-html-clean all 0.1.1-1 [12.0 kB] 1469s Get:104 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbformat all 5.9.1-1 [41.2 kB] 1469s Get:105 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbclient all 0.8.0-1 [55.6 kB] 1469s Get:106 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-pandocfilters all 1.5.1-1 [23.6 kB] 1469s Get:107 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-tinycss2 all 1.2.1-2 [19.6 kB] 1469s Get:108 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-nbconvert all 6.5.3-5 [152 kB] 1469s Get:109 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-codemirror all 5.65.0+~cs5.83.9-3 [755 kB] 1469s Get:110 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-marked all 4.2.3+ds+~4.0.7-3 [36.2 kB] 1469s Get:111 http://ftpmaster.internal/ubuntu oracular/main amd64 libjs-mathjax all 2.7.9+dfsg-1 [5665 kB] 1469s Get:112 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-requirejs all 2.3.6+ds+~2.1.34-2 [201 kB] 1469s Get:113 http://ftpmaster.internal/ubuntu oracular/universe amd64 libjs-requirejs-text all 2.0.12-1.1 [9056 B] 1469s Get:114 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-terminado all 0.17.1-1 [15.9 kB] 1469s Get:115 http://ftpmaster.internal/ubuntu oracular/main amd64 python3-prometheus-client all 0.19.0+ds1-1 [41.7 kB] 1469s Get:116 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-send2trash all 1.8.2-1 [15.5 kB] 1469s Get:117 http://ftpmaster.internal/ubuntu oracular/universe amd64 python3-notebook all 6.4.12-2.2ubuntu1 [1566 kB] 1470s Preconfiguring packages ... 1470s Fetched 90.3 MB in 1s (92.6 MB/s) 1470s Selecting previously unselected package libdebuginfod-common. 1470s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 73897 files and directories currently installed.) 1470s Preparing to unpack .../000-libdebuginfod-common_0.190-1.1build4_all.deb ... 1470s Unpacking libdebuginfod-common (0.190-1.1build4) ... 1470s Selecting previously unselected package gcc-13-base:amd64. 1470s Preparing to unpack .../001-gcc-13-base_13.2.0-23ubuntu4_amd64.deb ... 
1470s Unpacking gcc-13-base:amd64 (13.2.0-23ubuntu4) ... 1470s Selecting previously unselected package libisl23:amd64. 1470s Preparing to unpack .../002-libisl23_0.26-3build1_amd64.deb ... 1470s Unpacking libisl23:amd64 (0.26-3build1) ... 1470s Selecting previously unselected package libmpc3:amd64. 1470s Preparing to unpack .../003-libmpc3_1.3.1-1build1_amd64.deb ... 1470s Unpacking libmpc3:amd64 (1.3.1-1build1) ... 1470s Selecting previously unselected package cpp-13-x86-64-linux-gnu. 1470s Preparing to unpack .../004-cpp-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 1470s Unpacking cpp-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1470s Selecting previously unselected package cpp-13. 1470s Preparing to unpack .../005-cpp-13_13.2.0-23ubuntu4_amd64.deb ... 1470s Unpacking cpp-13 (13.2.0-23ubuntu4) ... 1470s Selecting previously unselected package cpp-x86-64-linux-gnu. 1470s Preparing to unpack .../006-cpp-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 1470s Unpacking cpp-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1470s Selecting previously unselected package cpp. 1470s Preparing to unpack .../007-cpp_4%3a13.2.0-7ubuntu1_amd64.deb ... 1470s Unpacking cpp (4:13.2.0-7ubuntu1) ... 1470s Selecting previously unselected package libcc1-0:amd64. 1470s Preparing to unpack .../008-libcc1-0_14-20240412-0ubuntu1_amd64.deb ... 1470s Unpacking libcc1-0:amd64 (14-20240412-0ubuntu1) ... 1470s Selecting previously unselected package libgomp1:amd64. 1470s Preparing to unpack .../009-libgomp1_14-20240412-0ubuntu1_amd64.deb ... 1470s Unpacking libgomp1:amd64 (14-20240412-0ubuntu1) ... 1470s Selecting previously unselected package libitm1:amd64. 1470s Preparing to unpack .../010-libitm1_14-20240412-0ubuntu1_amd64.deb ... 1470s Unpacking libitm1:amd64 (14-20240412-0ubuntu1) ... 1470s Selecting previously unselected package libatomic1:amd64. 1470s Preparing to unpack .../011-libatomic1_14-20240412-0ubuntu1_amd64.deb ... 1470s Unpacking libatomic1:amd64 (14-20240412-0ubuntu1) ... 1470s Selecting previously unselected package libasan8:amd64. 1470s Preparing to unpack .../012-libasan8_14-20240412-0ubuntu1_amd64.deb ... 1470s Unpacking libasan8:amd64 (14-20240412-0ubuntu1) ... 1471s Selecting previously unselected package liblsan0:amd64. 1471s Preparing to unpack .../013-liblsan0_14-20240412-0ubuntu1_amd64.deb ... 1471s Unpacking liblsan0:amd64 (14-20240412-0ubuntu1) ... 1471s Selecting previously unselected package libtsan2:amd64. 1471s Preparing to unpack .../014-libtsan2_14-20240412-0ubuntu1_amd64.deb ... 1471s Unpacking libtsan2:amd64 (14-20240412-0ubuntu1) ... 1471s Selecting previously unselected package libubsan1:amd64. 1471s Preparing to unpack .../015-libubsan1_14-20240412-0ubuntu1_amd64.deb ... 1471s Unpacking libubsan1:amd64 (14-20240412-0ubuntu1) ... 1471s Selecting previously unselected package libhwasan0:amd64. 1471s Preparing to unpack .../016-libhwasan0_14-20240412-0ubuntu1_amd64.deb ... 1471s Unpacking libhwasan0:amd64 (14-20240412-0ubuntu1) ... 1471s Selecting previously unselected package libquadmath0:amd64. 1471s Preparing to unpack .../017-libquadmath0_14-20240412-0ubuntu1_amd64.deb ... 1471s Unpacking libquadmath0:amd64 (14-20240412-0ubuntu1) ... 1471s Selecting previously unselected package libgcc-13-dev:amd64. 1471s Preparing to unpack .../018-libgcc-13-dev_13.2.0-23ubuntu4_amd64.deb ... 1471s Unpacking libgcc-13-dev:amd64 (13.2.0-23ubuntu4) ... 1471s Selecting previously unselected package gcc-13-x86-64-linux-gnu. 
1471s Preparing to unpack .../019-gcc-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 1471s Unpacking gcc-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1471s Selecting previously unselected package gcc-13. 1471s Preparing to unpack .../020-gcc-13_13.2.0-23ubuntu4_amd64.deb ... 1471s Unpacking gcc-13 (13.2.0-23ubuntu4) ... 1471s Selecting previously unselected package gcc-x86-64-linux-gnu. 1471s Preparing to unpack .../021-gcc-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 1471s Unpacking gcc-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1471s Selecting previously unselected package gcc. 1471s Preparing to unpack .../022-gcc_4%3a13.2.0-7ubuntu1_amd64.deb ... 1471s Unpacking gcc (4:13.2.0-7ubuntu1) ... 1471s Selecting previously unselected package libstdc++-13-dev:amd64. 1471s Preparing to unpack .../023-libstdc++-13-dev_13.2.0-23ubuntu4_amd64.deb ... 1471s Unpacking libstdc++-13-dev:amd64 (13.2.0-23ubuntu4) ... 1471s Selecting previously unselected package g++-13-x86-64-linux-gnu. 1471s Preparing to unpack .../024-g++-13-x86-64-linux-gnu_13.2.0-23ubuntu4_amd64.deb ... 1471s Unpacking g++-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1472s Selecting previously unselected package g++-13. 1472s Preparing to unpack .../025-g++-13_13.2.0-23ubuntu4_amd64.deb ... 1472s Unpacking g++-13 (13.2.0-23ubuntu4) ... 1472s Selecting previously unselected package g++-x86-64-linux-gnu. 1472s Preparing to unpack .../026-g++-x86-64-linux-gnu_4%3a13.2.0-7ubuntu1_amd64.deb ... 1472s Unpacking g++-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1472s Selecting previously unselected package g++. 1472s Preparing to unpack .../027-g++_4%3a13.2.0-7ubuntu1_amd64.deb ... 1472s Unpacking g++ (4:13.2.0-7ubuntu1) ... 1472s Selecting previously unselected package build-essential. 1472s Preparing to unpack .../028-build-essential_12.10ubuntu1_amd64.deb ... 1472s Unpacking build-essential (12.10ubuntu1) ... 1472s Selecting previously unselected package fonts-font-awesome. 1472s Preparing to unpack .../029-fonts-font-awesome_5.0.10+really4.7.0~dfsg-4.1_all.deb ... 1472s Unpacking fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 1472s Selecting previously unselected package fonts-glyphicons-halflings. 1472s Preparing to unpack .../030-fonts-glyphicons-halflings_1.009~3.4.1+dfsg-3_all.deb ... 1472s Unpacking fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 1472s Selecting previously unselected package fonts-mathjax. 1472s Preparing to unpack .../031-fonts-mathjax_2.7.9+dfsg-1_all.deb ... 1472s Unpacking fonts-mathjax (2.7.9+dfsg-1) ... 1472s Selecting previously unselected package libbabeltrace1:amd64. 1472s Preparing to unpack .../032-libbabeltrace1_1.5.11-3build3_amd64.deb ... 1472s Unpacking libbabeltrace1:amd64 (1.5.11-3build3) ... 1472s Selecting previously unselected package libdebuginfod1t64:amd64. 1472s Preparing to unpack .../033-libdebuginfod1t64_0.190-1.1build4_amd64.deb ... 1472s Unpacking libdebuginfod1t64:amd64 (0.190-1.1build4) ... 1472s Selecting previously unselected package libipt2. 1472s Preparing to unpack .../034-libipt2_2.0.6-1build1_amd64.deb ... 1472s Unpacking libipt2 (2.0.6-1build1) ... 1472s Selecting previously unselected package libpython3.12t64:amd64. 1472s Preparing to unpack .../035-libpython3.12t64_3.12.3-1_amd64.deb ... 1472s Unpacking libpython3.12t64:amd64 (3.12.3-1) ... 1472s Selecting previously unselected package libsource-highlight-common. 1472s Preparing to unpack .../036-libsource-highlight-common_3.1.9-4.3build1_all.deb ... 1472s Unpacking libsource-highlight-common (3.1.9-4.3build1) ... 
1472s Selecting previously unselected package libsource-highlight4t64:amd64. 1472s Preparing to unpack .../037-libsource-highlight4t64_3.1.9-4.3build1_amd64.deb ... 1472s Unpacking libsource-highlight4t64:amd64 (3.1.9-4.3build1) ... 1472s Selecting previously unselected package gdb. 1472s Preparing to unpack .../038-gdb_15.0.50.20240403-0ubuntu1_amd64.deb ... 1472s Unpacking gdb (15.0.50.20240403-0ubuntu1) ... 1472s Selecting previously unselected package libjs-underscore. 1472s Preparing to unpack .../039-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ... 1472s Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 1472s Selecting previously unselected package libjs-backbone. 1472s Preparing to unpack .../040-libjs-backbone_1.4.1~dfsg+~1.4.15-3_all.deb ... 1472s Unpacking libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 1472s Selecting previously unselected package libjs-bootstrap. 1472s Preparing to unpack .../041-libjs-bootstrap_3.4.1+dfsg-3_all.deb ... 1472s Unpacking libjs-bootstrap (3.4.1+dfsg-3) ... 1472s Selecting previously unselected package libjs-jquery. 1472s Preparing to unpack .../042-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ... 1472s Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 1472s Selecting previously unselected package libjs-bootstrap-tour. 1472s Preparing to unpack .../043-libjs-bootstrap-tour_0.12.0+dfsg-5_all.deb ... 1472s Unpacking libjs-bootstrap-tour (0.12.0+dfsg-5) ... 1473s Selecting previously unselected package libjs-es6-promise. 1473s Preparing to unpack .../044-libjs-es6-promise_4.2.8-12_all.deb ... 1473s Unpacking libjs-es6-promise (4.2.8-12) ... 1473s Selecting previously unselected package node-jed. 1473s Preparing to unpack .../045-node-jed_1.1.1-4_all.deb ... 1473s Unpacking node-jed (1.1.1-4) ... 1473s Selecting previously unselected package libjs-jed. 1473s Preparing to unpack .../046-libjs-jed_1.1.1-4_all.deb ... 1473s Unpacking libjs-jed (1.1.1-4) ... 1473s Selecting previously unselected package libjs-jquery-typeahead. 1473s Preparing to unpack .../047-libjs-jquery-typeahead_2.11.0+dfsg1-3_all.deb ... 1473s Unpacking libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 1473s Selecting previously unselected package libjs-jquery-ui. 1473s Preparing to unpack .../048-libjs-jquery-ui_1.13.2+dfsg-1_all.deb ... 1473s Unpacking libjs-jquery-ui (1.13.2+dfsg-1) ... 1473s Selecting previously unselected package libjs-moment. 1473s Preparing to unpack .../049-libjs-moment_2.29.4+ds-1_all.deb ... 1473s Unpacking libjs-moment (2.29.4+ds-1) ... 1473s Selecting previously unselected package libjs-text-encoding. 1473s Preparing to unpack .../050-libjs-text-encoding_0.7.0-5_all.deb ... 1473s Unpacking libjs-text-encoding (0.7.0-5) ... 1473s Selecting previously unselected package libjs-xterm. 1473s Preparing to unpack .../051-libjs-xterm_5.3.0-2_all.deb ... 1473s Unpacking libjs-xterm (5.3.0-2) ... 1473s Selecting previously unselected package libnorm1t64:amd64. 1473s Preparing to unpack .../052-libnorm1t64_1.5.9+dfsg-3.1build1_amd64.deb ... 1473s Unpacking libnorm1t64:amd64 (1.5.9+dfsg-3.1build1) ... 1473s Selecting previously unselected package libpgm-5.3-0t64:amd64. 1473s Preparing to unpack .../053-libpgm-5.3-0t64_5.3.128~dfsg-2.1build1_amd64.deb ... 1473s Unpacking libpgm-5.3-0t64:amd64 (5.3.128~dfsg-2.1build1) ... 1473s Selecting previously unselected package libsodium23:amd64. 1473s Preparing to unpack .../054-libsodium23_1.0.18-1build3_amd64.deb ... 1473s Unpacking libsodium23:amd64 (1.0.18-1build3) ... 1473s Selecting previously unselected package libxslt1.1:amd64. 
1473s Preparing to unpack .../055-libxslt1.1_1.1.39-0exp1build1_amd64.deb ... 1473s Unpacking libxslt1.1:amd64 (1.1.39-0exp1build1) ... 1473s Selecting previously unselected package libzmq5:amd64. 1473s Preparing to unpack .../056-libzmq5_4.3.5-1build2_amd64.deb ... 1473s Unpacking libzmq5:amd64 (4.3.5-1build2) ... 1473s Selecting previously unselected package python-tinycss2-common. 1473s Preparing to unpack .../057-python-tinycss2-common_1.2.1-2_all.deb ... 1473s Unpacking python-tinycss2-common (1.2.1-2) ... 1473s Selecting previously unselected package python3-all. 1473s Preparing to unpack .../058-python3-all_3.12.3-0ubuntu1_amd64.deb ... 1473s Unpacking python3-all (3.12.3-0ubuntu1) ... 1473s Selecting previously unselected package python3-argon2. 1473s Preparing to unpack .../059-python3-argon2_21.1.0-2build1_amd64.deb ... 1473s Unpacking python3-argon2 (21.1.0-2build1) ... 1473s Selecting previously unselected package python3-asttokens. 1473s Preparing to unpack .../060-python3-asttokens_2.4.1-1_all.deb ... 1473s Unpacking python3-asttokens (2.4.1-1) ... 1473s Selecting previously unselected package python3-webencodings. 1473s Preparing to unpack .../061-python3-webencodings_0.5.1-5_all.deb ... 1473s Unpacking python3-webencodings (0.5.1-5) ... 1473s Selecting previously unselected package python3-html5lib. 1473s Preparing to unpack .../062-python3-html5lib_1.1-6_all.deb ... 1473s Unpacking python3-html5lib (1.1-6) ... 1473s Selecting previously unselected package python3-bleach. 1473s Preparing to unpack .../063-python3-bleach_6.1.0-2_all.deb ... 1473s Unpacking python3-bleach (6.1.0-2) ... 1473s Selecting previously unselected package python3-soupsieve. 1473s Preparing to unpack .../064-python3-soupsieve_2.5-1_all.deb ... 1473s Unpacking python3-soupsieve (2.5-1) ... 1473s Selecting previously unselected package python3-bs4. 1473s Preparing to unpack .../065-python3-bs4_4.12.3-1_all.deb ... 1473s Unpacking python3-bs4 (4.12.3-1) ... 1473s Selecting previously unselected package python3-bytecode. 1473s Preparing to unpack .../066-python3-bytecode_0.15.1-3_all.deb ... 1473s Unpacking python3-bytecode (0.15.1-3) ... 1473s Selecting previously unselected package python3-traitlets. 1473s Preparing to unpack .../067-python3-traitlets_5.14.3-1_all.deb ... 1473s Unpacking python3-traitlets (5.14.3-1) ... 1473s Selecting previously unselected package python3-comm. 1473s Preparing to unpack .../068-python3-comm_0.2.1-1_all.deb ... 1473s Unpacking python3-comm (0.2.1-1) ... 1473s Selecting previously unselected package python3-coverage. 1473s Preparing to unpack .../069-python3-coverage_7.4.4+dfsg1-0ubuntu2_amd64.deb ... 1473s Unpacking python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 1473s Selecting previously unselected package python3-dateutil. 1473s Preparing to unpack .../070-python3-dateutil_2.8.2-3ubuntu1_all.deb ... 1473s Unpacking python3-dateutil (2.8.2-3ubuntu1) ... 1473s Selecting previously unselected package python3-pydevd. 1473s Preparing to unpack .../071-python3-pydevd_2.10.0+ds-10ubuntu1_amd64.deb ... 1473s Unpacking python3-pydevd (2.10.0+ds-10ubuntu1) ... 1473s Selecting previously unselected package python3-debugpy. 1473s Preparing to unpack .../072-python3-debugpy_1.8.0+ds-4ubuntu4_all.deb ... 1473s Unpacking python3-debugpy (1.8.0+ds-4ubuntu4) ... 1473s Selecting previously unselected package python3-decorator. 1473s Preparing to unpack .../073-python3-decorator_5.1.1-5_all.deb ... 1473s Unpacking python3-decorator (5.1.1-5) ... 
1473s Selecting previously unselected package python3-defusedxml. 1473s Preparing to unpack .../074-python3-defusedxml_0.7.1-2_all.deb ... 1473s Unpacking python3-defusedxml (0.7.1-2) ... 1473s Selecting previously unselected package python3-entrypoints. 1473s Preparing to unpack .../075-python3-entrypoints_0.4-2_all.deb ... 1473s Unpacking python3-entrypoints (0.4-2) ... 1474s Selecting previously unselected package python3-executing. 1474s Preparing to unpack .../076-python3-executing_2.0.1-0.1_all.deb ... 1474s Unpacking python3-executing (2.0.1-0.1) ... 1474s Selecting previously unselected package python3-fastjsonschema. 1474s Preparing to unpack .../077-python3-fastjsonschema_2.19.0-1_all.deb ... 1474s Unpacking python3-fastjsonschema (2.19.0-1) ... 1474s Selecting previously unselected package python3-parso. 1474s Preparing to unpack .../078-python3-parso_0.8.3-1_all.deb ... 1474s Unpacking python3-parso (0.8.3-1) ... 1474s Selecting previously unselected package python3-typeshed. 1474s Preparing to unpack .../079-python3-typeshed_0.0~git20231111.6764465-3_all.deb ... 1474s Unpacking python3-typeshed (0.0~git20231111.6764465-3) ... 1474s Selecting previously unselected package python3-jedi. 1474s Preparing to unpack .../080-python3-jedi_0.19.1+ds1-1_all.deb ... 1474s Unpacking python3-jedi (0.19.1+ds1-1) ... 1474s Selecting previously unselected package python3-matplotlib-inline. 1474s Preparing to unpack .../081-python3-matplotlib-inline_0.1.6-2_all.deb ... 1474s Unpacking python3-matplotlib-inline (0.1.6-2) ... 1474s Selecting previously unselected package python3-ptyprocess. 1474s Preparing to unpack .../082-python3-ptyprocess_0.7.0-5_all.deb ... 1474s Unpacking python3-ptyprocess (0.7.0-5) ... 1474s Selecting previously unselected package python3-pexpect. 1474s Preparing to unpack .../083-python3-pexpect_4.9-2_all.deb ... 1474s Unpacking python3-pexpect (4.9-2) ... 1475s Selecting previously unselected package python3-wcwidth. 1475s Preparing to unpack .../084-python3-wcwidth_0.2.5+dfsg1-1.1ubuntu1_all.deb ... 1475s Unpacking python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 1475s Selecting previously unselected package python3-prompt-toolkit. 1475s Preparing to unpack .../085-python3-prompt-toolkit_3.0.43-1_all.deb ... 1475s Unpacking python3-prompt-toolkit (3.0.43-1) ... 1475s Selecting previously unselected package python3-pure-eval. 1475s Preparing to unpack .../086-python3-pure-eval_0.2.2-2_all.deb ... 1475s Unpacking python3-pure-eval (0.2.2-2) ... 1475s Selecting previously unselected package python3-stack-data. 1475s Preparing to unpack .../087-python3-stack-data_0.6.3-1_all.deb ... 1475s Unpacking python3-stack-data (0.6.3-1) ... 1475s Selecting previously unselected package python3-ipython. 1475s Preparing to unpack .../088-python3-ipython_8.20.0-1_all.deb ... 1475s Unpacking python3-ipython (8.20.0-1) ... 1475s Selecting previously unselected package python3-platformdirs. 1475s Preparing to unpack .../089-python3-platformdirs_4.2.0-1_all.deb ... 1475s Unpacking python3-platformdirs (4.2.0-1) ... 1475s Selecting previously unselected package python3-jupyter-core. 1475s Preparing to unpack .../090-python3-jupyter-core_5.3.2-1ubuntu1_all.deb ... 1475s Unpacking python3-jupyter-core (5.3.2-1ubuntu1) ... 1475s Selecting previously unselected package python3-nest-asyncio. 1475s Preparing to unpack .../091-python3-nest-asyncio_1.5.4-1_all.deb ... 1475s Unpacking python3-nest-asyncio (1.5.4-1) ... 1475s Selecting previously unselected package python3-tornado. 
1475s Preparing to unpack .../092-python3-tornado_6.4.0-1build1_amd64.deb ... 1475s Unpacking python3-tornado (6.4.0-1build1) ... 1475s Selecting previously unselected package python3-py. 1475s Preparing to unpack .../093-python3-py_1.11.0-2_all.deb ... 1475s Unpacking python3-py (1.11.0-2) ... 1475s Selecting previously unselected package python3-zmq. 1475s Preparing to unpack .../094-python3-zmq_24.0.1-5build1_amd64.deb ... 1475s Unpacking python3-zmq (24.0.1-5build1) ... 1475s Selecting previously unselected package python3-jupyter-client. 1475s Preparing to unpack .../095-python3-jupyter-client_7.4.9-2ubuntu1_all.deb ... 1475s Unpacking python3-jupyter-client (7.4.9-2ubuntu1) ... 1475s Selecting previously unselected package python3-packaging. 1475s Preparing to unpack .../096-python3-packaging_24.0-1_all.deb ... 1475s Unpacking python3-packaging (24.0-1) ... 1475s Selecting previously unselected package python3-psutil. 1475s Preparing to unpack .../097-python3-psutil_5.9.8-2build2_amd64.deb ... 1475s Unpacking python3-psutil (5.9.8-2build2) ... 1475s Selecting previously unselected package python3-ipykernel. 1475s Preparing to unpack .../098-python3-ipykernel_6.29.3-1_all.deb ... 1475s Unpacking python3-ipykernel (6.29.3-1) ... 1475s Selecting previously unselected package python3-ipython-genutils. 1475s Preparing to unpack .../099-python3-ipython-genutils_0.2.0-6_all.deb ... 1475s Unpacking python3-ipython-genutils (0.2.0-6) ... 1475s Selecting previously unselected package python3-jupyterlab-pygments. 1475s Preparing to unpack .../100-python3-jupyterlab-pygments_0.2.2-3_all.deb ... 1475s Unpacking python3-jupyterlab-pygments (0.2.2-3) ... 1475s Selecting previously unselected package python3-lxml:amd64. 1475s Preparing to unpack .../101-python3-lxml_5.2.1-1_amd64.deb ... 1475s Unpacking python3-lxml:amd64 (5.2.1-1) ... 1475s Selecting previously unselected package python3-lxml-html-clean. 1475s Preparing to unpack .../102-python3-lxml-html-clean_0.1.1-1_all.deb ... 1475s Unpacking python3-lxml-html-clean (0.1.1-1) ... 1475s Selecting previously unselected package python3-nbformat. 1475s Preparing to unpack .../103-python3-nbformat_5.9.1-1_all.deb ... 1475s Unpacking python3-nbformat (5.9.1-1) ... 1475s Selecting previously unselected package python3-nbclient. 1475s Preparing to unpack .../104-python3-nbclient_0.8.0-1_all.deb ... 1475s Unpacking python3-nbclient (0.8.0-1) ... 1475s Selecting previously unselected package python3-pandocfilters. 1475s Preparing to unpack .../105-python3-pandocfilters_1.5.1-1_all.deb ... 1475s Unpacking python3-pandocfilters (1.5.1-1) ... 1475s Selecting previously unselected package python3-tinycss2. 1475s Preparing to unpack .../106-python3-tinycss2_1.2.1-2_all.deb ... 1475s Unpacking python3-tinycss2 (1.2.1-2) ... 1475s Selecting previously unselected package python3-nbconvert. 1475s Preparing to unpack .../107-python3-nbconvert_6.5.3-5_all.deb ... 1475s Unpacking python3-nbconvert (6.5.3-5) ... 1475s Selecting previously unselected package libjs-codemirror. 1475s Preparing to unpack .../108-libjs-codemirror_5.65.0+~cs5.83.9-3_all.deb ... 1475s Unpacking libjs-codemirror (5.65.0+~cs5.83.9-3) ... 1476s Selecting previously unselected package libjs-marked. 1476s Preparing to unpack .../109-libjs-marked_4.2.3+ds+~4.0.7-3_all.deb ... 1476s Unpacking libjs-marked (4.2.3+ds+~4.0.7-3) ... 1476s Selecting previously unselected package libjs-mathjax. 1476s Preparing to unpack .../110-libjs-mathjax_2.7.9+dfsg-1_all.deb ... 
1476s Unpacking libjs-mathjax (2.7.9+dfsg-1) ... 1476s Selecting previously unselected package libjs-requirejs. 1476s Preparing to unpack .../111-libjs-requirejs_2.3.6+ds+~2.1.34-2_all.deb ... 1476s Unpacking libjs-requirejs (2.3.6+ds+~2.1.34-2) ... 1476s Selecting previously unselected package libjs-requirejs-text. 1476s Preparing to unpack .../112-libjs-requirejs-text_2.0.12-1.1_all.deb ... 1476s Unpacking libjs-requirejs-text (2.0.12-1.1) ... 1476s Selecting previously unselected package python3-terminado. 1476s Preparing to unpack .../113-python3-terminado_0.17.1-1_all.deb ... 1476s Unpacking python3-terminado (0.17.1-1) ... 1476s Selecting previously unselected package python3-prometheus-client. 1476s Preparing to unpack .../114-python3-prometheus-client_0.19.0+ds1-1_all.deb ... 1476s Unpacking python3-prometheus-client (0.19.0+ds1-1) ... 1477s Selecting previously unselected package python3-send2trash. 1477s Preparing to unpack .../115-python3-send2trash_1.8.2-1_all.deb ... 1477s Unpacking python3-send2trash (1.8.2-1) ... 1477s Selecting previously unselected package python3-notebook. 1477s Preparing to unpack .../116-python3-notebook_6.4.12-2.2ubuntu1_all.deb ... 1477s Unpacking python3-notebook (6.4.12-2.2ubuntu1) ... 1477s Setting up python3-entrypoints (0.4-2) ... 1477s Setting up libjs-jquery-typeahead (2.11.0+dfsg1-3) ... 1477s Setting up python3-tornado (6.4.0-1build1) ... 1477s Setting up libnorm1t64:amd64 (1.5.9+dfsg-3.1build1) ... 1477s Setting up python3-pure-eval (0.2.2-2) ... 1477s Setting up python3-send2trash (1.8.2-1) ... 1477s Setting up fonts-mathjax (2.7.9+dfsg-1) ... 1477s Setting up libsodium23:amd64 (1.0.18-1build3) ... 1477s Setting up libjs-mathjax (2.7.9+dfsg-1) ... 1477s Setting up python3-py (1.11.0-2) ... 1478s Setting up libdebuginfod-common (0.190-1.1build4) ... 1478s Setting up libjs-requirejs-text (2.0.12-1.1) ... 1478s Setting up python3-parso (0.8.3-1) ... 1478s Setting up python3-defusedxml (0.7.1-2) ... 1478s Setting up python3-ipython-genutils (0.2.0-6) ... 1478s Setting up python3-asttokens (2.4.1-1) ... 1478s Setting up fonts-glyphicons-halflings (1.009~3.4.1+dfsg-3) ... 1478s Setting up python3-all (3.12.3-0ubuntu1) ... 1478s Setting up python3-coverage (7.4.4+dfsg1-0ubuntu2) ... 1478s Setting up libjs-moment (2.29.4+ds-1) ... 1478s Setting up python3-pandocfilters (1.5.1-1) ... 1478s Setting up libgomp1:amd64 (14-20240412-0ubuntu1) ... 1478s Setting up libjs-requirejs (2.3.6+ds+~2.1.34-2) ... 1478s Setting up libjs-es6-promise (4.2.8-12) ... 1478s Setting up libjs-text-encoding (0.7.0-5) ... 1478s Setting up python3-webencodings (0.5.1-5) ... 1479s Setting up python3-platformdirs (4.2.0-1) ... 1479s Setting up python3-psutil (5.9.8-2build2) ... 1479s Setting up libsource-highlight-common (3.1.9-4.3build1) ... 1479s Setting up python3-jupyterlab-pygments (0.2.2-3) ... 1479s Setting up libpython3.12t64:amd64 (3.12.3-1) ... 1479s Setting up libpgm-5.3-0t64:amd64 (5.3.128~dfsg-2.1build1) ... 1479s Setting up python3-decorator (5.1.1-5) ... 1479s Setting up python3-packaging (24.0-1) ... 1479s Setting up gcc-13-base:amd64 (13.2.0-23ubuntu4) ... 1479s Setting up python3-wcwidth (0.2.5+dfsg1-1.1ubuntu1) ... 1479s Setting up node-jed (1.1.1-4) ... 1479s Setting up python3-typeshed (0.0~git20231111.6764465-3) ... 1479s Setting up python3-executing (2.0.1-0.1) ... 1480s Setting up libjs-xterm (5.3.0-2) ... 1480s Setting up python3-nest-asyncio (1.5.4-1) ... 1480s Setting up libquadmath0:amd64 (14-20240412-0ubuntu1) ... 
1480s Setting up python3-bytecode (0.15.1-3) ... 1480s Setting up libjs-codemirror (5.65.0+~cs5.83.9-3) ... 1480s Setting up libmpc3:amd64 (1.3.1-1build1) ... 1480s Setting up libatomic1:amd64 (14-20240412-0ubuntu1) ... 1480s Setting up libjs-jed (1.1.1-4) ... 1480s Setting up libipt2 (2.0.6-1build1) ... 1480s Setting up python3-html5lib (1.1-6) ... 1480s Setting up libbabeltrace1:amd64 (1.5.11-3build3) ... 1480s Setting up libubsan1:amd64 (14-20240412-0ubuntu1) ... 1480s Setting up python3-fastjsonschema (2.19.0-1) ... 1480s Setting up libhwasan0:amd64 (14-20240412-0ubuntu1) ... 1480s Setting up python3-traitlets (5.14.3-1) ... 1480s Setting up libasan8:amd64 (14-20240412-0ubuntu1) ... 1480s Setting up python-tinycss2-common (1.2.1-2) ... 1480s Setting up libxslt1.1:amd64 (1.1.39-0exp1build1) ... 1480s Setting up python3-argon2 (21.1.0-2build1) ... 1480s Setting up python3-dateutil (2.8.2-3ubuntu1) ... 1481s Setting up libtsan2:amd64 (14-20240412-0ubuntu1) ... 1481s Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ... 1481s Setting up libisl23:amd64 (0.26-3build1) ... 1481s Setting up python3-stack-data (0.6.3-1) ... 1481s Setting up python3-soupsieve (2.5-1) ... 1481s Setting up fonts-font-awesome (5.0.10+really4.7.0~dfsg-4.1) ... 1481s Setting up libcc1-0:amd64 (14-20240412-0ubuntu1) ... 1481s Setting up python3-jupyter-core (5.3.2-1ubuntu1) ... 1481s Setting up liblsan0:amd64 (14-20240412-0ubuntu1) ... 1481s Setting up libjs-bootstrap (3.4.1+dfsg-3) ... 1481s Setting up libitm1:amd64 (14-20240412-0ubuntu1) ... 1481s Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ... 1481s Setting up python3-ptyprocess (0.7.0-5) ... 1481s Setting up libjs-marked (4.2.3+ds+~4.0.7-3) ... 1481s Setting up python3-prompt-toolkit (3.0.43-1) ... 1481s Setting up libdebuginfod1t64:amd64 (0.190-1.1build4) ... 1481s Setting up python3-tinycss2 (1.2.1-2) ... 1481s Setting up libzmq5:amd64 (4.3.5-1build2) ... 1481s Setting up python3-jedi (0.19.1+ds1-1) ... 1482s Setting up cpp-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1482s Setting up libjs-bootstrap-tour (0.12.0+dfsg-5) ... 1482s Setting up libjs-backbone (1.4.1~dfsg+~1.4.15-3) ... 1482s Setting up libsource-highlight4t64:amd64 (3.1.9-4.3build1) ... 1482s Setting up python3-nbformat (5.9.1-1) ... 1482s Setting up python3-bs4 (4.12.3-1) ... 1482s Setting up python3-bleach (6.1.0-2) ... 1482s Setting up python3-matplotlib-inline (0.1.6-2) ... 1482s Setting up python3-comm (0.2.1-1) ... 1482s Setting up python3-prometheus-client (0.19.0+ds1-1) ... 1483s Setting up gdb (15.0.50.20240403-0ubuntu1) ... 1483s Setting up libjs-jquery-ui (1.13.2+dfsg-1) ... 1483s Setting up python3-pexpect (4.9-2) ... 1483s Setting up python3-zmq (24.0.1-5build1) ... 1483s Setting up python3-terminado (0.17.1-1) ... 1483s Setting up libgcc-13-dev:amd64 (13.2.0-23ubuntu4) ... 1483s Setting up python3-lxml:amd64 (5.2.1-1) ... 1483s Setting up python3-jupyter-client (7.4.9-2ubuntu1) ... 1483s Setting up python3-pydevd (2.10.0+ds-10ubuntu1) ... 1484s Setting up libstdc++-13-dev:amd64 (13.2.0-23ubuntu4) ... 1484s Setting up cpp-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1484s Setting up cpp-13 (13.2.0-23ubuntu4) ... 1484s Setting up gcc-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1484s Setting up python3-debugpy (1.8.0+ds-4ubuntu4) ... 1484s Setting up python3-nbclient (0.8.0-1) ... 1484s Setting up python3-ipython (8.20.0-1) ... 1485s Setting up python3-ipykernel (6.29.3-1) ... 1485s Setting up gcc-13 (13.2.0-23ubuntu4) ... 1485s Setting up python3-lxml-html-clean (0.1.1-1) ... 
1485s Setting up python3-nbconvert (6.5.3-5) ... 1485s Setting up cpp (4:13.2.0-7ubuntu1) ... 1485s Setting up g++-13-x86-64-linux-gnu (13.2.0-23ubuntu4) ... 1485s Setting up gcc-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1485s Setting up python3-notebook (6.4.12-2.2ubuntu1) ... 1486s Setting up gcc (4:13.2.0-7ubuntu1) ... 1486s Setting up g++-x86-64-linux-gnu (4:13.2.0-7ubuntu1) ... 1486s Setting up g++-13 (13.2.0-23ubuntu4) ... 1486s Setting up g++ (4:13.2.0-7ubuntu1) ... 1486s update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode 1486s Setting up build-essential (12.10ubuntu1) ... 1486s Processing triggers for man-db (2.12.0-4build2) ... 1487s Processing triggers for libc-bin (2.39-0ubuntu8) ... 1487s Reading package lists... 1487s Building dependency tree... 1487s Reading state information... 1488s Starting pkgProblemResolver with broken count: 0 1488s Starting 2 pkgProblemResolver with broken count: 0 1488s Done 1488s The following NEW packages will be installed: 1488s autopkgtest-satdep 1488s 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 1488s Need to get 0 B/700 B of archives. 1488s After this operation, 0 B of additional disk space will be used. 1488s Get:1 /tmp/autopkgtest.FMSSaJ/6-autopkgtest-satdep.deb autopkgtest-satdep amd64 0 [700 B] 1488s Selecting previously unselected package autopkgtest-satdep. 1488s (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 91437 files and directories currently installed.) 1488s Preparing to unpack .../6-autopkgtest-satdep.deb ... 1488s Unpacking autopkgtest-satdep (0) ... 1488s Setting up autopkgtest-satdep (0) ... 1489s autopkgtest: WARNING: package python3-notebook:i386 is not installed though it should be 1490s (Reading database ... 91437 files and directories currently installed.) 1490s Removing autopkgtest-satdep (0) ... 1491s autopkgtest [23:34:19]: test autodep8-python3: set -e ; for py in $(py3versions -r 2>/dev/null) ; do cd "$AUTOPKGTEST_TMP" ; echo "Testing with $py:" ; $py -c "import notebook; print(notebook)" ; done 1491s autopkgtest [23:34:19]: test autodep8-python3: [----------------------- 1491s Testing with python3.12: 1491s 1492s autopkgtest [23:34:20]: test autodep8-python3: -----------------------] 1492s autopkgtest [23:34:20]: test autodep8-python3: - - - - - - - - - - results - - - - - - - - - - 1492s autodep8-python3 PASS (superficial) 1492s autopkgtest [23:34:20]: @@@@@@@@@@@@@@@@@@@@ summary 1492s pytest FAIL non-zero exit status 1 1492s command1 PASS (superficial) 1492s autodep8-python3 PASS (superficial) 1511s Creating nova instance adt-oracular-i386-jupyter-notebook-20240513-230927-juju-7f2275-prod-proposed-migration-environment-3-94b9549f-2e0a-478a-a6e0-92e38f878270 from image adt/ubuntu-oracular-amd64-server-20240513.img (UUID 3680964e-3e6d-4e07-b76c-c2eef9987d4d)... 
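The autodep8-python3 test that closes the run is a generic import smoke test: for every interpreter reported by py3versions -r it changes into $AUTOPKGTEST_TMP and runs the interpreter with "import notebook; print(notebook)", so a missing dependency or broken installation surfaces as a traceback and a non-zero exit. A rough Python sketch of what one iteration amounts to is below; the module name comes from the log, and the error handling around it is illustrative.

#!/usr/bin/env python3
# Sketch of one iteration of the autodep8-python3 smoke test: import the
# package under test and print the module object, failing on ImportError.
import importlib
import sys

MODULE = "notebook"  # package exercised in the log above

try:
    module = importlib.import_module(MODULE)
except ImportError as exc:
    print(f"import of {MODULE} failed: {exc}", file=sys.stderr)
    sys.exit(1)

print(module)  # e.g. <module 'notebook' from '/usr/lib/python3/dist-packages/notebook/__init__.py'>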