Output of ansible run
PLAY [Prepare nodes for upgrade] ********************************************************************************************************************************************************************************
TASK [kubespray-defaults : Gather ansible_default_ipv4 from all hosts] ******************************************************************************************************************************************
ok: [k8s1.local] => (item=k8s1.local)
[WARNING]: Unhandled error in Python interpreter discovery for host k8s1.local: Failed to connect to the host via ssh: ssh: connect to host k8s3.local port 22: Connection timed out
failed: [k8s1.local -> k8s3.local] (item=k8s3.local) => {"ansible_loop_var": "item", "item": "k8s3.local", "msg": "Data could not be sent to remote host \"k8s3.local\". Make sure this host can be reached over ssh: ssh: connect to host k8s3.local port 22: Connection timed out\r\n", "unreachable": true}
ok: [k8s1.local -> k8s2.local] => (item=k8s2.local)
fatal: [k8s1.local -> {{ item }}]: UNREACHABLE! => {"changed": false, "msg": "All items completed", "results": [{"ansible_facts": {"ansible_default_ipv4": {"address": "10.88.111.29", "alias": "eth0", "broadcast": "10.88.111.255", "gateway": "10.88.111.254", "interface": "eth0", "macaddress": "bc:24:11:41:88:12", "mtu": 1500, "netmask": "255.255.252.0", "network": "10.88.108.0", "prefix": "22", "type": "ether"}, "discovered_interpreter_python": "/usr/bin/python3"}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": ["ansible_default_ipv4"], "gather_subset": ["!all", "network"], "gather_timeout": 10}}, "item": "k8s1.local"}, {"ansible_loop_var": "item", "item": "k8s3.local", "msg": "Data could not be sent to remote host \"k8s3.local\". Make sure this host can be reached over ssh: ssh: connect to host k8s3.local port 22: Connection timed out\r\n", "unreachable": true}, {"ansible_facts": {"ansible_default_ipv4": {"address": "10.88.111.30", "alias": "eth0", "broadcast": "10.88.111.255", "gateway": "10.88.111.254", "interface": "eth0", "macaddress": "bc:24:11:be:42:a6", "mtu": 1500, "netmask": "255.255.252.0", "network": "10.88.108.0", "prefix": "22", "type": "ether"}, "discovered_interpreter_python": "/usr/bin/python3"}, "ansible_loop_var": "item", "changed": false, "failed": false, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": ["ansible_default_ipv4"], "gather_subset": ["!all", "network"], "gather_timeout": 10}}, "item": "k8s2.local"}]}
...ignoring
NO MORE HOSTS LEFT **********************************************************************************************************************************************************************************************
PLAY RECAP ******************************************************************************************************************************************************************************************************
k8s1.local : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
What happened?
This is a continuation of #10313.
When roles/kubespray-defaults/tasks/fallback_ips.yml runs on an inventory with an unreachable host, it exits the entire play after the setup task, with NO MORE HOSTS LEFT.
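For reference, the failing task is a delegated setup loop. The following is a paraphrased sketch, not a verbatim copy of the role: the module arguments are reconstructed from the module_args in the error output above, and the loop source is simplified.

```yaml
# Paraphrased sketch of the failing task in
# roles/kubespray-defaults/tasks/fallback_ips.yml (reconstructed from the
# log's module_args; loop source simplified).
# It runs once and delegates `setup` to every host in the play, so a single
# unreachable delegate marks the delegating host unreachable and, despite
# ignore_unreachable, the play still ends.
- name: Gather ansible_default_ipv4 from all hosts
  setup:
    gather_subset: '!all,network'
    filter: ansible_default_ipv4
  delegate_to: "{{ item }}"
  delegate_facts: true
  run_once: true
  ignore_unreachable: true
  loop: "{{ ansible_play_hosts_all }}"
```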
What did you expect to happen?
I expect the entire kubespray-defaults role to finish running, but it exits the play after that single task.
How can we reproduce it (as minimally and precisely as possible)?
Minimal inventory
And then this minimal playbook
Execute with
ansible-playbook -i hosts.ini bug.yml
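The inventory and playbook file contents did not survive the copy above. Based on the hostnames and the play name in the log, a reconstruction might look roughly like this (the group name is an assumption; k8s3.local is the host that is unreachable):

```ini
# hosts.ini (reconstructed sketch; hostnames taken from the log above)
[k8s_cluster]
k8s1.local
k8s2.local
k8s3.local   # unreachable: ssh to port 22 times out
```

```yaml
# bug.yml (reconstructed sketch; play name taken from the log above)
- name: Prepare nodes for upgrade
  hosts: k8s_cluster
  gather_facts: false
  roles:
    - kubespray-defaults
```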
OS
Version of Ansible
Tried both:
and
Version of Python
Python 3.9.18
Version of Kubespray (commit)
66eaba3
Network plugin used
calico
Full inventory with variables
See "How can we reproduce it" section. Just that inventory, no variables.
Command used to invoke ansible
See "How can we reproduce it" section
Output of ansible run
(See the play output quoted near the top of this report.)
Anything else we need to know
PR #10601 added ignore_unreachable: true. That made the Play Recap show ignored=1 instead of unreachable=1, but it ultimately doesn't solve the issue of the play exiting early.
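For completeness, one hypothetical direction for a fix (untested, and not taken from PR #10601): probe reachability from the control node first, then loop only over hosts that answered, so the fact-gathering task never delegates to a dead host. Using wait_for on localhost avoids ssh entirely for the probe, so a dead host is an ordinary task failure rather than an unreachable result; note that an open port 22 is only a rough proxy for a working ssh connection.

```yaml
# Hypothetical sketch, not from the kubespray codebase.
# Step 1: check port 22 from the control node; a timeout is a normal
# failure handled by ignore_errors, never an UNREACHABLE result.
- name: Probe ssh reachability from the control node
  wait_for:
    host: "{{ item }}"
    port: 22
    timeout: 5
  delegate_to: localhost
  run_once: true
  loop: "{{ ansible_play_hosts_all }}"
  ignore_errors: true
  register: ssh_probe

# Step 2: delegate `setup` only to hosts whose probe succeeded, so the
# loop can no longer abort the play on an unreachable delegate.
- name: Gather ansible_default_ipv4 from reachable hosts only
  setup:
    gather_subset: '!all,network'
    filter: ansible_default_ipv4
  delegate_to: "{{ item }}"
  delegate_facts: true
  run_once: true
  loop: "{{ ssh_probe.results | rejectattr('failed') | map(attribute='item') | list }}"
```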