Using /etc/ansible/ansible.cfg as config file [WARNING]: running playbook inside collection fedora.linux_system_roles PLAY [Run playbook 'playbooks/tests_ipv6.yml' with initscripts as provider] **** TASK [Gathering Facts] ********************************************************* Saturday 06 July 2024 06:58:32 -0400 (0:00:00.024) 0:00:00.024 ********* ok: [managed_node1] TASK [Include the task 'el_repo_setup.yml'] ************************************ Saturday 06 July 2024 06:58:33 -0400 (0:00:00.901) 0:00:00.925 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/tasks/el_repo_setup.yml for managed_node1 TASK [Gather the minimum subset of ansible_facts required by the network role test] *** Saturday 06 July 2024 06:58:33 -0400 (0:00:00.033) 0:00:00.959 ********* ok: [managed_node1] TASK [Check if system is ostree] *********************************************** Saturday 06 July 2024 06:58:34 -0400 (0:00:00.428) 0:00:01.388 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [Set flag to indicate system is ostree] *********************************** Saturday 06 July 2024 06:58:34 -0400 (0:00:00.408) 0:00:01.796 ********* ok: [managed_node1] => { "ansible_facts": { "__network_is_ostree": false }, "changed": false } TASK [Fix CentOS6 Base repo] *************************************************** Saturday 06 July 2024 06:58:34 -0400 (0:00:00.049) 0:00:01.846 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Include the task 'enable_epel.yml'] ************************************** Saturday 06 July 2024 06:58:34 -0400 (0:00:00.045) 0:00:01.891 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/tasks/enable_epel.yml for managed_node1 TASK [Create EPEL 7] *********************************************************** Saturday 06 July 2024 06:58:34 -0400 (0:00:00.078) 0:00:01.970 ********* ok: [managed_node1] => { "attempts": 1, "changed": false, "cmd": [ "rpm", "-iv", "https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm" ], "rc": 0 } STDOUT: skipped, since /etc/yum.repos.d/epel.repo exists TASK [Install yum-utils package] *********************************************** Saturday 06 July 2024 06:58:34 -0400 (0:00:00.386) 0:00:02.357 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [ "yum-utils-1.1.31-54.el7_8.noarch providing yum-utils is already installed" ] } TASK [Enable EPEL 7] *********************************************************** Saturday 06 July 2024 06:58:35 -0400 (0:00:00.688) 0:00:03.045 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "yum-config-manager", "--enable", "epel" ], "delta": "0:00:00.187042", "end": "2024-07-06 06:58:36.101336", "rc": 0, "start": "2024-07-06 06:58:35.914294" } STDOUT: Loaded plugins: fastestmirror ================================== repo: epel ================================== [epel] async = True bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7 baseurl = cache = 0 cachedir = /var/cache/yum/x86_64/7/epel check_config_file_age = True compare_providers_priority = 80 cost = 1000 deltarpm_metadata_percentage = 100 deltarpm_percentage = enabled = True enablegroups = True exclude = failovermethod = priority ftp_disable_epsv = False gpgcadir = /var/lib/yum/repos/x86_64/7/epel/gpgcadir gpgcakey = gpgcheck = 
True gpgdir = /var/lib/yum/repos/x86_64/7/epel/gpgdir gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 hdrdir = /var/cache/yum/x86_64/7/epel/headers http_caching = all includepkgs = ip_resolve = keepalive = True keepcache = False mddownloadpolicy = sqlite mdpolicy = group:small mediaid = metadata_expire = 21600 metadata_expire_filter = read-only:present metalink = https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=x86_64&infra=stock&content=centos minrate = 0 mirrorlist = mirrorlist_expire = 86400 name = Extra Packages for Enterprise Linux 7 - x86_64 old_base_cache_dir = password = persistdir = /var/lib/yum/repos/x86_64/7/epel pkgdir = /var/cache/yum/x86_64/7/epel/packages proxy = False proxy_dict = proxy_password = proxy_username = repo_gpgcheck = False retries = 10 skip_if_unavailable = False ssl_check_cert_permissions = True sslcacert = sslclientcert = sslclientkey = sslverify = True throttle = 0 timeout = 30.0 ui_id = epel/x86_64 ui_repoid_vars = releasever, basearch username = TASK [Enable EPEL 8] *********************************************************** Saturday 06 July 2024 06:58:36 -0400 (0:00:00.496) 0:00:03.542 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Enable EPEL 6] *********************************************************** Saturday 06 July 2024 06:58:36 -0400 (0:00:00.027) 0:00:03.569 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Set network provider to 'initscripts'] *********************************** Saturday 06 July 2024 06:58:36 -0400 (0:00:00.024) 0:00:03.594 ********* ok: [managed_node1] => { "ansible_facts": { "network_provider": "initscripts" }, "changed": false } PLAY [Play for testing IPv6 config] ******************************************** TASK [Gathering Facts] ********************************************************* Saturday 06 July 2024 06:58:36 -0400 (0:00:00.028) 0:00:03.623 ********* ok: [managed_node1] TASK [Include the task 'show_interfaces.yml'] ********************************** Saturday 06 July 2024 06:58:36 -0400 (0:00:00.672) 0:00:04.296 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/show_interfaces.yml for managed_node1 TASK [Include the task 'get_current_interfaces.yml'] *************************** Saturday 06 July 2024 06:58:36 -0400 (0:00:00.037) 0:00:04.333 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_current_interfaces.yml for managed_node1 TASK [Gather current interface info] ******************************************* Saturday 06 July 2024 06:58:36 -0400 (0:00:00.039) 0:00:04.373 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ls", "-1" ], "delta": "0:00:00.003621", "end": "2024-07-06 06:58:37.245563", "rc": 0, "start": "2024-07-06 06:58:37.241942" } STDOUT: bond0 bonding_masters eth0 lo rpltstbr team0 TASK [Set current_interfaces] ************************************************** Saturday 06 July 2024 06:58:37 -0400 (0:00:00.312) 0:00:04.685 ********* ok: [managed_node1] => { "ansible_facts": { "current_interfaces": [ "bond0", "bonding_masters", "eth0", "lo", "rpltstbr", "team0" ] }, "changed": false } TASK [Show current_interfaces] ************************************************* Saturday 06 July 2024 06:58:37 -0400 (0:00:00.032) 
0:00:04.717 ********* ok: [managed_node1] => {} MSG: current_interfaces: [u'bond0', u'bonding_masters', u'eth0', u'lo', u'rpltstbr', u'team0'] TASK [Include the task 'manage_test_interface.yml'] **************************** Saturday 06 July 2024 06:58:37 -0400 (0:00:00.031) 0:00:04.748 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/manage_test_interface.yml for managed_node1 TASK [Ensure state in ["present", "absent"]] *********************************** Saturday 06 July 2024 06:58:37 -0400 (0:00:00.049) 0:00:04.798 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Ensure type in ["dummy", "tap", "veth"]] ********************************* Saturday 06 July 2024 06:58:37 -0400 (0:00:00.028) 0:00:04.826 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Include the task 'show_interfaces.yml'] ********************************** Saturday 06 July 2024 06:58:37 -0400 (0:00:00.027) 0:00:04.854 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/show_interfaces.yml for managed_node1 TASK [Include the task 'get_current_interfaces.yml'] *************************** Saturday 06 July 2024 06:58:37 -0400 (0:00:00.040) 0:00:04.894 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_current_interfaces.yml for managed_node1 TASK [Gather current interface info] ******************************************* Saturday 06 July 2024 06:58:37 -0400 (0:00:00.038) 0:00:04.932 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ls", "-1" ], "delta": "0:00:00.003633", "end": "2024-07-06 06:58:37.801651", "rc": 0, "start": "2024-07-06 06:58:37.798018" } STDOUT: bond0 bonding_masters eth0 lo rpltstbr team0 TASK [Set current_interfaces] ************************************************** Saturday 06 July 2024 06:58:37 -0400 (0:00:00.307) 0:00:05.240 ********* ok: [managed_node1] => { "ansible_facts": { "current_interfaces": [ "bond0", "bonding_masters", "eth0", "lo", "rpltstbr", "team0" ] }, "changed": false } TASK [Show current_interfaces] ************************************************* Saturday 06 July 2024 06:58:37 -0400 (0:00:00.031) 0:00:05.272 ********* ok: [managed_node1] => {} MSG: current_interfaces: [u'bond0', u'bonding_masters', u'eth0', u'lo', u'rpltstbr', u'team0'] TASK [Install iproute] ********************************************************* Saturday 06 July 2024 06:58:37 -0400 (0:00:00.030) 0:00:05.303 ********* ok: [managed_node1] => { "attempts": 1, "changed": false, "rc": 0, "results": [ "iproute-4.11.0-30.el7.x86_64 providing iproute is already installed" ] } TASK [Create veth interface veth0] ********************************************* Saturday 06 July 2024 06:58:38 -0400 (0:00:00.522) 0:00:05.825 ********* ok: [managed_node1] => (item=ip link add veth0 type veth peer name peerveth0) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "ip", "link", "add", "veth0", "type", "veth", "peer", "name", "peerveth0" ], "delta": "0:00:00.006776", "end": "2024-07-06 06:58:38.711299", "item": "ip link add veth0 type veth peer name peerveth0", "rc": 0, "start": "2024-07-06 06:58:38.704523" } ok: [managed_node1] => (item=ip 
link set peerveth0 up) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "ip", "link", "set", "peerveth0", "up" ], "delta": "0:00:00.009272", "end": "2024-07-06 06:58:39.015303", "item": "ip link set peerveth0 up", "rc": 0, "start": "2024-07-06 06:58:39.006031" } ok: [managed_node1] => (item=ip link set veth0 up) => { "ansible_loop_var": "item", "changed": false, "cmd": [ "ip", "link", "set", "veth0", "up" ], "delta": "0:00:00.017060", "end": "2024-07-06 06:58:39.318514", "item": "ip link set veth0 up", "rc": 0, "start": "2024-07-06 06:58:39.301454" } TASK [Set up veth as managed by NetworkManager] ******************************** Saturday 06 July 2024 06:58:39 -0400 (0:00:00.935) 0:00:06.761 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "nmcli", "d", "set", "veth0", "managed", "true" ], "delta": "0:00:00.032922", "end": "2024-07-06 06:58:39.659334", "rc": 0, "start": "2024-07-06 06:58:39.626412" } TASK [Delete veth interface veth0] ********************************************* Saturday 06 July 2024 06:58:39 -0400 (0:00:00.345) 0:00:07.106 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Create dummy interface veth0] ******************************************** Saturday 06 July 2024 06:58:39 -0400 (0:00:00.030) 0:00:07.137 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Delete dummy interface veth0] ******************************************** Saturday 06 July 2024 06:58:39 -0400 (0:00:00.030) 0:00:07.167 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Create tap interface veth0] ********************************************** Saturday 06 July 2024 06:58:39 -0400 (0:00:00.030) 0:00:07.197 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Delete tap interface veth0] ********************************************** Saturday 06 July 2024 06:58:39 -0400 (0:00:00.030) 0:00:07.228 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Set up gateway ip on veth peer] ****************************************** Saturday 06 July 2024 06:58:39 -0400 (0:00:00.029) 0:00:07.257 ********* ok: [managed_node1] => { "changed": false, "cmd": "ip netns add ns1\nip link set peerveth0 netns ns1\nip netns exec ns1 ip -6 addr add 2001:db8::1/32 dev peerveth0\nip netns exec ns1 ip link set peerveth0 up\n", "delta": "0:00:00.047695", "end": "2024-07-06 06:58:40.174455", "rc": 0, "start": "2024-07-06 06:58:40.126760" } TASK [TEST: I can configure an interface with static ipv6 config] ************** Saturday 06 July 2024 06:58:40 -0400 (0:00:00.356) 0:00:07.613 ********* ok: [managed_node1] => {} MSG: ################################################## TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role] *** Saturday 06 July 2024 06:58:40 -0400 (0:00:00.030) 0:00:07.644 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/roles/network/tasks/set_facts.yml for managed_node1 TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role are present] *** Saturday 06 July 2024 06:58:40 -0400 (0:00:00.051) 0:00:07.695 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", 
"changed": false } TASK [fedora.linux_system_roles.network : Check if system is ostree] *********** Saturday 06 July 2024 06:58:40 -0400 (0:00:00.031) 0:00:07.727 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Set flag to indicate system is ostree] *** Saturday 06 July 2024 06:58:40 -0400 (0:00:00.028) 0:00:07.756 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check which services are running] **** Saturday 06 July 2024 06:58:40 -0400 (0:00:00.028) 0:00:07.784 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check which packages are installed] *** Saturday 06 July 2024 06:58:41 -0400 (0:00:01.115) 0:00:08.900 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Print network provider] ************** Saturday 06 July 2024 06:58:43 -0400 (0:00:01.583) 0:00:10.483 ********* ok: [managed_node1] => {} MSG: Using network provider: initscripts TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if using the `network_state` variable with the initscripts provider] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.074) 0:00:10.557 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if the system version of the managed host is below 8] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.070) 0:00:10.628 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the DNF package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.071) 0:00:10.700 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the YUM package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.071) 0:00:10.771 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Ask user's consent to restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.079) 0:00:10.851 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install packages] ******************** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.078) 0:00:10.929 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install NetworkManager and nmstate when using network_state variable] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.125) 0:00:11.055 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network 
: Install python3-libnmstate when using network_state variable] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.072) 0:00:11.128 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.071) 0:00:11.199 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable and start NetworkManager] ***** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.079) 0:00:11.279 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Enable and start wpa_supplicant] ***** Saturday 06 July 2024 06:58:43 -0400 (0:00:00.073) 0:00:11.353 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable network service] ************** Saturday 06 July 2024 06:58:44 -0400 (0:00:00.071) 0:00:11.424 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Ensure initscripts network file dependency is present] *** Saturday 06 July 2024 06:58:44 -0400 (0:00:00.792) 0:00:12.216 ********* ok: [managed_node1] => { "changed": false, "dest": "/etc/sysconfig/network", "src": "/root/.ansible/tmp/ansible-local-10986WrXuXV/tmp36QsUa" } TASK [fedora.linux_system_roles.network : Configure networking connection profiles] *** Saturday 06 July 2024 06:58:45 -0400 (0:00:00.389) 0:00:12.605 ********* changed: [managed_node1] => { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "ip": { "address": [ "2001:db8::2/32", "2001:db8::3/32", "2001:db8::4/32" ], "auto6": false, "dhcp4": false, "gateway6": "2001:db8::1" }, "name": "veth0", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "initscripts" } }, "changed": true } STDERR: [003] #0, state:up persistent_state:present, 'veth0': add ifcfg-rh profile 'veth0' [004] #0, state:up persistent_state:present, 'veth0': up connection veth0 (is-modified) [005] #0, state:up persistent_state:present, 'veth0': call 'ifup veth0': rc=0, out='INFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state INFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state ', err='' TASK [fedora.linux_system_roles.network : Configure networking state] ********** Saturday 06 July 2024 06:58:48 -0400 (0:00:02.995) 0:00:15.601 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Show stderr messages for the network_connections] *** Saturday 06 July 2024 06:58:48 -0400 (0:00:00.072) 0:00:15.674 ********* ok: [managed_node1] => { "__network_connections_result.stderr_lines": [ "[003] #0, state:up persistent_state:present, 'veth0': add ifcfg-rh profile 'veth0'", "[004] #0, state:up persistent_state:present, 'veth0': up connection veth0 (is-modified)", "[005] #0, state:up persistent_state:present, 'veth0': call 'ifup veth0': 
rc=0, out='INFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state", "INFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state", "', err=''" ] } TASK [fedora.linux_system_roles.network : Show debug messages for the network_connections] *** Saturday 06 July 2024 06:58:48 -0400 (0:00:00.075) 0:00:15.749 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "ip": { "address": [ "2001:db8::2/32", "2001:db8::3/32", "2001:db8::4/32" ], "auto6": false, "dhcp4": false, "gateway6": "2001:db8::1" }, "name": "veth0", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "initscripts" } }, "changed": true, "failed": false, "stderr": "[003] #0, state:up persistent_state:present, 'veth0': add ifcfg-rh profile 'veth0'\n[004] #0, state:up persistent_state:present, 'veth0': up connection veth0 (is-modified)\n[005] #0, state:up persistent_state:present, 'veth0': call 'ifup veth0': rc=0, out='INFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state\nINFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state\n', err=''\n", "stderr_lines": [ "[003] #0, state:up persistent_state:present, 'veth0': add ifcfg-rh profile 'veth0'", "[004] #0, state:up persistent_state:present, 'veth0': up connection veth0 (is-modified)", "[005] #0, state:up persistent_state:present, 'veth0': call 'ifup veth0': rc=0, out='INFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state", "INFO : [ipv6_wait_tentative] Waiting for interface veth0 IPv6 address(es) to leave the 'tentative' state", "', err=''" ] } } TASK [fedora.linux_system_roles.network : Show debug messages for the network_state] *** Saturday 06 July 2024 06:58:48 -0400 (0:00:00.076) 0:00:15.826 ********* skipping: [managed_node1] => {} TASK [fedora.linux_system_roles.network : Re-test connectivity] **************** Saturday 06 July 2024 06:58:48 -0400 (0:00:00.072) 0:00:15.898 ********* ok: [managed_node1] => { "changed": false, "ping": "pong" } TASK [Include the task 'assert_device_present.yml'] **************************** Saturday 06 July 2024 06:58:49 -0400 (0:00:00.523) 0:00:16.421 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_device_present.yml for managed_node1 TASK [Include the task 'get_interface_stat.yml'] ******************************* Saturday 06 July 2024 06:58:49 -0400 (0:00:00.139) 0:00:16.561 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_interface_stat.yml for managed_node1 TASK [Get stat for interface veth0] ******************************************** Saturday 06 July 2024 06:58:49 -0400 (0:00:00.123) 0:00:16.685 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1720263525.929408, "block_size": 4096, "blocks": 0, "ctime": 1720263518.7114215, "dev": 18, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 18668, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": 
false, "issock": false, "isuid": false, "lnk_source": "/sys/devices/virtual/net/veth0", "lnk_target": "../../devices/virtual/net/veth0", "mode": "0777", "mtime": 1720263518.7114215, "nlink": 1, "path": "/sys/class/net/veth0", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [Assert that the interface is present - 'veth0'] ************************** Saturday 06 July 2024 06:58:49 -0400 (0:00:00.358) 0:00:17.043 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Include the task 'assert_profile_present.yml'] *************************** Saturday 06 July 2024 06:58:49 -0400 (0:00:00.076) 0:00:17.120 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_profile_present.yml for managed_node1 TASK [Include the task 'get_profile_stat.yml'] ********************************* Saturday 06 July 2024 06:58:49 -0400 (0:00:00.146) 0:00:17.266 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_profile_stat.yml for managed_node1 TASK [Initialize NM profile exist and ansible_managed comment flag] ************ Saturday 06 July 2024 06:58:50 -0400 (0:00:00.128) 0:00:17.394 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": false, "lsr_net_profile_exists": false, "lsr_net_profile_fingerprint": false }, "changed": false } TASK [Stat profile file] ******************************************************* Saturday 06 July 2024 06:58:50 -0400 (0:00:00.073) 0:00:17.468 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1720263525.944408, "block_size": 4096, "blocks": 8, "ctime": 1720263525.9324079, "dev": 51713, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 22516, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1720263525.9324079, "nlink": 1, "path": "/etc/sysconfig/network-scripts/ifcfg-veth0", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 253, "uid": 0, "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [Set NM profile exist flag based on the profile files] ******************** Saturday 06 July 2024 06:58:50 -0400 (0:00:00.344) 0:00:17.812 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_exists": true }, "changed": false } TASK [Get NM profile info] ***************************************************** Saturday 06 July 2024 06:58:50 -0400 (0:00:00.075) 0:00:17.888 ********* fatal: [managed_node1]: FAILED! 
=> { "changed": false, "cmd": "nmcli -f NAME,FILENAME connection show |grep veth0 | grep /etc", "delta": "0:00:00.029396", "end": "2024-07-06 06:58:50.791076", "rc": 1, "start": "2024-07-06 06:58:50.761680" } MSG: non-zero return code ...ignoring TASK [Set NM profile exist flag and ansible_managed flag true based on the nmcli output] *** Saturday 06 July 2024 06:58:50 -0400 (0:00:00.425) 0:00:18.314 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Get the ansible_managed comment in ifcfg-veth0] ************************** Saturday 06 July 2024 06:58:51 -0400 (0:00:00.074) 0:00:18.388 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "grep", "^# Ansible managed", "/etc/sysconfig/network-scripts/ifcfg-veth0" ], "delta": "0:00:00.003579", "end": "2024-07-06 06:58:51.270024", "rc": 0, "start": "2024-07-06 06:58:51.266445" } STDOUT: # Ansible managed TASK [Verify the ansible_managed comment in ifcfg-veth0] *********************** Saturday 06 July 2024 06:58:51 -0400 (0:00:00.367) 0:00:18.756 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": true }, "changed": false } TASK [Get the fingerprint comment in ifcfg-veth0] ****************************** Saturday 06 July 2024 06:58:51 -0400 (0:00:00.085) 0:00:18.841 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "grep", "^# system_role:network", "/etc/sysconfig/network-scripts/ifcfg-veth0" ], "delta": "0:00:00.003613", "end": "2024-07-06 06:58:51.717459", "rc": 0, "start": "2024-07-06 06:58:51.713846" } STDOUT: # system_role:network TASK [Verify the fingerprint comment in ifcfg-veth0] *************************** Saturday 06 July 2024 06:58:51 -0400 (0:00:00.359) 0:00:19.201 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_fingerprint": true }, "changed": false } TASK [Assert that the profile is present - 'veth0'] **************************** Saturday 06 July 2024 06:58:51 -0400 (0:00:00.085) 0:00:19.287 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the ansible managed comment is present in 'veth0'] *********** Saturday 06 July 2024 06:58:51 -0400 (0:00:00.078) 0:00:19.366 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the fingerprint comment is present in veth0] ***************** Saturday 06 July 2024 06:58:52 -0400 (0:00:00.080) 0:00:19.447 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Get ip address information] ********************************************** Saturday 06 July 2024 06:58:52 -0400 (0:00:00.077) 0:00:19.524 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ip", "addr", "show", "veth0" ], "delta": "0:00:00.003716", "end": "2024-07-06 06:58:52.409301", "rc": 0, "start": "2024-07-06 06:58:52.405585" } STDOUT: 125: veth0@if124: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 6a:58:d3:45:05:89 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet6 2001:db8::4/32 scope global valid_lft forever preferred_lft forever inet6 2001:db8::3/32 scope global valid_lft forever preferred_lft forever inet6 2001:db8::2/32 scope global valid_lft forever preferred_lft forever inet6 fe80::6858:d3ff:fe45:589/64 scope link valid_lft forever preferred_lft forever TASK [Show ip_addr] ************************************************************ Saturday 06 July 2024 06:58:52 -0400 (0:00:00.366) 0:00:19.891 ********* ok: [managed_node1] => { 
"ip_addr.stdout": "125: veth0@if124: mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 6a:58:d3:45:05:89 brd ff:ff:ff:ff:ff:ff link-netnsid 0\n inet6 2001:db8::4/32 scope global \n valid_lft forever preferred_lft forever\n inet6 2001:db8::3/32 scope global \n valid_lft forever preferred_lft forever\n inet6 2001:db8::2/32 scope global \n valid_lft forever preferred_lft forever\n inet6 fe80::6858:d3ff:fe45:589/64 scope link \n valid_lft forever preferred_lft forever" } TASK [Assert ipv6 addresses are correctly set] ********************************* Saturday 06 July 2024 06:58:52 -0400 (0:00:00.073) 0:00:19.964 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Get ipv6 routes] ********************************************************* Saturday 06 July 2024 06:58:52 -0400 (0:00:00.081) 0:00:20.046 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ip", "-6", "route" ], "delta": "0:00:00.003762", "end": "2024-07-06 06:58:52.918686", "rc": 0, "start": "2024-07-06 06:58:52.914924" } STDOUT: unreachable ::/96 dev lo metric 1024 error -113 pref medium unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -113 pref medium 2001:db8::/32 dev veth0 proto kernel metric 256 pref medium unreachable 2002:a00::/24 dev lo metric 1024 error -113 pref medium unreachable 2002:7f00::/24 dev lo metric 1024 error -113 pref medium unreachable 2002:a9fe::/32 dev lo metric 1024 error -113 pref medium unreachable 2002:ac10::/28 dev lo metric 1024 error -113 pref medium unreachable 2002:c0a8::/32 dev lo metric 1024 error -113 pref medium unreachable 2002:e000::/19 dev lo metric 1024 error -113 pref medium unreachable 3ffe:ffff::/32 dev lo metric 1024 error -113 pref medium fe80::/64 dev eth0 proto kernel metric 256 mtu 9001 pref medium fe80::/64 dev rpltstbr proto kernel metric 256 pref medium fe80::/64 dev veth0 proto kernel metric 256 pref medium default via 2001:db8::1 dev veth0 metric 1 pref medium TASK [Show ipv6_route] ********************************************************* Saturday 06 July 2024 06:58:53 -0400 (0:00:00.362) 0:00:20.409 ********* ok: [managed_node1] => { "ipv6_route.stdout": "unreachable ::/96 dev lo metric 1024 error -113 pref medium\nunreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -113 pref medium\n2001:db8::/32 dev veth0 proto kernel metric 256 pref medium\nunreachable 2002:a00::/24 dev lo metric 1024 error -113 pref medium\nunreachable 2002:7f00::/24 dev lo metric 1024 error -113 pref medium\nunreachable 2002:a9fe::/32 dev lo metric 1024 error -113 pref medium\nunreachable 2002:ac10::/28 dev lo metric 1024 error -113 pref medium\nunreachable 2002:c0a8::/32 dev lo metric 1024 error -113 pref medium\nunreachable 2002:e000::/19 dev lo metric 1024 error -113 pref medium\nunreachable 3ffe:ffff::/32 dev lo metric 1024 error -113 pref medium\nfe80::/64 dev eth0 proto kernel metric 256 mtu 9001 pref medium\nfe80::/64 dev rpltstbr proto kernel metric 256 pref medium\nfe80::/64 dev veth0 proto kernel metric 256 pref medium\ndefault via 2001:db8::1 dev veth0 metric 1 pref medium" } TASK [Assert default ipv6 route is set] **************************************** Saturday 06 July 2024 06:58:53 -0400 (0:00:00.075) 0:00:20.485 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Ensure ping6 command is present] ***************************************** Saturday 06 July 2024 06:58:53 -0400 (0:00:00.075) 0:00:20.560 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [ 
"iputils-20160308-10.el7.x86_64 providing iputils is already installed" ] } TASK [Test gateway can be pinged] ********************************************** Saturday 06 July 2024 06:58:53 -0400 (0:00:00.585) 0:00:21.146 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ping6", "-c1", "2001:db8::1" ], "delta": "0:00:00.009943", "end": "2024-07-06 06:58:54.041297", "rc": 0, "start": "2024-07-06 06:58:54.031354" } STDOUT: PING 2001:db8::1(2001:db8::1) 56 data bytes 64 bytes from 2001:db8::1: icmp_seq=1 ttl=64 time=0.054 ms --- 2001:db8::1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms TASK [TEARDOWN: remove profiles.] ********************************************** Saturday 06 July 2024 06:58:54 -0400 (0:00:00.374) 0:00:21.520 ********* ok: [managed_node1] => {} MSG: ################################################## TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role] *** Saturday 06 July 2024 06:58:54 -0400 (0:00:00.073) 0:00:21.594 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/roles/network/tasks/set_facts.yml for managed_node1 TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role are present] *** Saturday 06 July 2024 06:58:54 -0400 (0:00:00.133) 0:00:21.727 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check if system is ostree] *********** Saturday 06 July 2024 06:58:54 -0400 (0:00:00.090) 0:00:21.818 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Set flag to indicate system is ostree] *** Saturday 06 July 2024 06:58:54 -0400 (0:00:00.071) 0:00:21.890 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check which services are running] **** Saturday 06 July 2024 06:58:54 -0400 (0:00:00.071) 0:00:21.961 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check which packages are installed] *** Saturday 06 July 2024 06:58:55 -0400 (0:00:01.016) 0:00:22.978 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Print network provider] ************** Saturday 06 July 2024 06:58:56 -0400 (0:00:01.314) 0:00:24.293 ********* ok: [managed_node1] => {} MSG: Using network provider: initscripts TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if using the `network_state` variable with the initscripts provider] *** Saturday 06 July 2024 06:58:56 -0400 (0:00:00.076) 0:00:24.369 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if the system version of the managed host is below 8] *** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.073) 0:00:24.442 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result 
was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the DNF package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.073) 0:00:24.516 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the YUM package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.072) 0:00:24.589 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Ask user's consent to restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.081) 0:00:24.670 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install packages] ******************** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.078) 0:00:24.748 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install NetworkManager and nmstate when using network_state variable] *** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.131) 0:00:24.880 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install python3-libnmstate when using network_state variable] *** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.074) 0:00:24.954 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.072) 0:00:25.027 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable and start NetworkManager] ***** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.079) 0:00:25.107 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Enable and start wpa_supplicant] ***** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.073) 0:00:25.180 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable network service] ************** Saturday 06 July 2024 06:58:57 -0400 (0:00:00.073) 0:00:25.254 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Ensure initscripts network file dependency is present] *** Saturday 06 July 2024 06:58:58 -0400 (0:00:00.499) 0:00:25.753 ********* ok: [managed_node1] => { "changed": false, "dest": "/etc/sysconfig/network", "src": "/root/.ansible/tmp/ansible-local-10986WrXuXV/tmpurGxcP" } TASK [fedora.linux_system_roles.network : Configure networking connection profiles] *** Saturday 06 July 2024 06:58:58 -0400 (0:00:00.377) 0:00:26.130 ********* changed: [managed_node1] => { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# 
system_role:network\n", "connections": [ { "name": "veth0", "persistent_state": "absent", "state": "down" } ], "force_state_change": false, "ignore_errors": false, "provider": "initscripts" } }, "changed": true } STDERR: [003] #0, state:down persistent_state:absent, 'veth0': up connection veth0 (active) [004] #0, state:down persistent_state:absent, 'veth0': call 'ifdown veth0': rc=0, out='', err='' [005] #0, state:down persistent_state:absent, 'veth0': delete ifcfg-rh file '/etc/sysconfig/network-scripts/ifcfg-veth0' TASK [fedora.linux_system_roles.network : Configure networking state] ********** Saturday 06 July 2024 06:58:59 -0400 (0:00:00.693) 0:00:26.824 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Show stderr messages for the network_connections] *** Saturday 06 July 2024 06:58:59 -0400 (0:00:00.072) 0:00:26.897 ********* ok: [managed_node1] => { "__network_connections_result.stderr_lines": [ "[003] #0, state:down persistent_state:absent, 'veth0': up connection veth0 (active)", "[004] #0, state:down persistent_state:absent, 'veth0': call 'ifdown veth0': rc=0, out='', err=''", "[005] #0, state:down persistent_state:absent, 'veth0': delete ifcfg-rh file '/etc/sysconfig/network-scripts/ifcfg-veth0'" ] } TASK [fedora.linux_system_roles.network : Show debug messages for the network_connections] *** Saturday 06 July 2024 06:58:59 -0400 (0:00:00.078) 0:00:26.975 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "name": "veth0", "persistent_state": "absent", "state": "down" } ], "force_state_change": false, "ignore_errors": false, "provider": "initscripts" } }, "changed": true, "failed": false, "stderr": "[003] #0, state:down persistent_state:absent, 'veth0': up connection veth0 (active)\n[004] #0, state:down persistent_state:absent, 'veth0': call 'ifdown veth0': rc=0, out='', err=''\n[005] #0, state:down persistent_state:absent, 'veth0': delete ifcfg-rh file '/etc/sysconfig/network-scripts/ifcfg-veth0'\n", "stderr_lines": [ "[003] #0, state:down persistent_state:absent, 'veth0': up connection veth0 (active)", "[004] #0, state:down persistent_state:absent, 'veth0': call 'ifdown veth0': rc=0, out='', err=''", "[005] #0, state:down persistent_state:absent, 'veth0': delete ifcfg-rh file '/etc/sysconfig/network-scripts/ifcfg-veth0'" ] } } TASK [fedora.linux_system_roles.network : Show debug messages for the network_state] *** Saturday 06 July 2024 06:58:59 -0400 (0:00:00.078) 0:00:27.053 ********* skipping: [managed_node1] => {} TASK [fedora.linux_system_roles.network : Re-test connectivity] **************** Saturday 06 July 2024 06:58:59 -0400 (0:00:00.072) 0:00:27.126 ********* ok: [managed_node1] => { "changed": false, "ping": "pong" } TASK [Include the task 'manage_test_interface.yml'] **************************** Saturday 06 July 2024 06:59:00 -0400 (0:00:00.348) 0:00:27.475 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/manage_test_interface.yml for managed_node1 TASK [Ensure state in ["present", "absent"]] *********************************** Saturday 06 July 2024 06:59:00 -0400 (0:00:00.152) 0:00:27.628 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK 
[Ensure type in ["dummy", "tap", "veth"]] ********************************* Saturday 06 July 2024 06:59:00 -0400 (0:00:00.073) 0:00:27.702 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Include the task 'show_interfaces.yml'] ********************************** Saturday 06 July 2024 06:59:00 -0400 (0:00:00.073) 0:00:27.775 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/show_interfaces.yml for managed_node1 TASK [Include the task 'get_current_interfaces.yml'] *************************** Saturday 06 July 2024 06:59:00 -0400 (0:00:00.171) 0:00:27.946 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_current_interfaces.yml for managed_node1 TASK [Gather current interface info] ******************************************* Saturday 06 July 2024 06:59:00 -0400 (0:00:00.124) 0:00:28.070 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ls", "-1" ], "delta": "0:00:00.003686", "end": "2024-07-06 06:59:00.956515", "rc": 0, "start": "2024-07-06 06:59:00.952829" } STDOUT: bond0 bonding_masters eth0 lo rpltstbr team0 veth0 TASK [Set current_interfaces] ************************************************** Saturday 06 July 2024 06:59:01 -0400 (0:00:00.374) 0:00:28.445 ********* ok: [managed_node1] => { "ansible_facts": { "current_interfaces": [ "bond0", "bonding_masters", "eth0", "lo", "rpltstbr", "team0", "veth0" ] }, "changed": false } TASK [Show current_interfaces] ************************************************* Saturday 06 July 2024 06:59:01 -0400 (0:00:00.077) 0:00:28.522 ********* ok: [managed_node1] => {} MSG: current_interfaces: [u'bond0', u'bonding_masters', u'eth0', u'lo', u'rpltstbr', u'team0', u'veth0'] TASK [Install iproute] ********************************************************* Saturday 06 July 2024 06:59:01 -0400 (0:00:00.074) 0:00:28.597 ********* ok: [managed_node1] => { "attempts": 1, "changed": false, "rc": 0, "results": [ "iproute-4.11.0-30.el7.x86_64 providing iproute is already installed" ] } TASK [Create veth interface veth0] ********************************************* Saturday 06 July 2024 06:59:01 -0400 (0:00:00.632) 0:00:29.229 ********* skipping: [managed_node1] => (item=ip link add veth0 type veth peer name peerveth0) => { "ansible_loop_var": "item", "changed": false, "item": "ip link add veth0 type veth peer name peerveth0", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=ip link set peerveth0 up) => { "ansible_loop_var": "item", "changed": false, "item": "ip link set peerveth0 up", "skip_reason": "Conditional result was False" } skipping: [managed_node1] => (item=ip link set veth0 up) => { "ansible_loop_var": "item", "changed": false, "item": "ip link set veth0 up", "skip_reason": "Conditional result was False" } TASK [Set up veth as managed by NetworkManager] ******************************** Saturday 06 July 2024 06:59:01 -0400 (0:00:00.098) 0:00:29.328 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Delete veth interface veth0] ********************************************* Saturday 06 July 2024 06:59:02 -0400 (0:00:00.074) 0:00:29.402 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ip", "link", "del", "veth0", "type", "veth" ], "delta": 
"0:00:00.008790", "end": "2024-07-06 06:59:02.290453", "rc": 0, "start": "2024-07-06 06:59:02.281663" } TASK [Create dummy interface veth0] ******************************************** Saturday 06 July 2024 06:59:02 -0400 (0:00:00.371) 0:00:29.774 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Delete dummy interface veth0] ******************************************** Saturday 06 July 2024 06:59:02 -0400 (0:00:00.074) 0:00:29.848 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Create tap interface veth0] ********************************************** Saturday 06 July 2024 06:59:02 -0400 (0:00:00.075) 0:00:29.924 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Delete tap interface veth0] ********************************************** Saturday 06 July 2024 06:59:02 -0400 (0:00:00.073) 0:00:29.997 ********* skipping: [managed_node1] => { "changed": false, "skip_reason": "Conditional result was False" } TASK [Clean up namespace] ****************************************************** Saturday 06 July 2024 06:59:02 -0400 (0:00:00.073) 0:00:30.071 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ip", "netns", "delete", "ns1" ], "delta": "0:00:00.008504", "end": "2024-07-06 06:59:02.948445", "rc": 0, "start": "2024-07-06 06:59:02.939941" } TASK [Verify network state restored to default] ******************************** Saturday 06 July 2024 06:59:03 -0400 (0:00:00.404) 0:00:30.475 ********* included: /var/ARTIFACTS/work-generalukpimi5l/plans/general/tree/tmp.l3f5UaCnbZ/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/check_network_dns.yml for managed_node1 TASK [Check routes and DNS] **************************************************** Saturday 06 July 2024 06:59:03 -0400 (0:00:00.153) 0:00:30.629 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euo pipefail\necho IP\nip a\necho IP ROUTE\nip route\necho IP -6 ROUTE\nip -6 route\necho RESOLV\nif [ -f /etc/resolv.conf ]; then\n cat /etc/resolv.conf\nelse\n echo NO /etc/resolv.conf\n ls -alrtF /etc/resolv.* || :\nfi\n", "delta": "0:00:00.008266", "end": "2024-07-06 06:59:03.524576", "rc": 0, "start": "2024-07-06 06:59:03.516310" } STDOUT: IP 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 9001 qdisc mq state UP group default qlen 1000 link/ether 0e:28:e7:9d:f0:91 brd ff:ff:ff:ff:ff:ff inet 10.31.43.76/22 brd 10.31.43.255 scope global noprefixroute dynamic eth0 valid_lft 2836sec preferred_lft 2836sec inet6 fe80::c28:e7ff:fe9d:f091/64 scope link valid_lft forever preferred_lft forever 10: bond0: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether fe:68:9f:79:ef:a1 brd ff:ff:ff:ff:ff:ff 18: team0: mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether e2:92:3f:ac:ff:50 brd ff:ff:ff:ff:ff:ff 60: rpltstbr: mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 02:d3:90:06:94:37 brd ff:ff:ff:ff:ff:ff inet 192.0.2.72/31 brd 192.0.2.73 scope global noprefixroute rpltstbr valid_lft forever preferred_lft forever inet6 fe80::d3:90ff:fe06:9437/64 scope link valid_lft forever preferred_lft forever IP ROUTE default via 10.31.40.1 dev eth0 proto dhcp 
metric 100 10.31.40.0/22 dev eth0 proto kernel scope link src 10.31.43.76 metric 100 192.0.2.72/31 dev rpltstbr proto kernel scope link src 192.0.2.72 metric 425 IP -6 ROUTE unreachable ::/96 dev lo metric 1024 error -113 pref medium unreachable ::ffff:0.0.0.0/96 dev lo metric 1024 error -113 pref medium unreachable 2002:a00::/24 dev lo metric 1024 error -113 pref medium unreachable 2002:7f00::/24 dev lo metric 1024 error -113 pref medium unreachable 2002:a9fe::/32 dev lo metric 1024 error -113 pref medium unreachable 2002:ac10::/28 dev lo metric 1024 error -113 pref medium unreachable 2002:c0a8::/32 dev lo metric 1024 error -113 pref medium unreachable 2002:e000::/19 dev lo metric 1024 error -113 pref medium unreachable 3ffe:ffff::/32 dev lo metric 1024 error -113 pref medium fe80::/64 dev eth0 proto kernel metric 256 mtu 9001 pref medium fe80::/64 dev rpltstbr proto kernel metric 256 pref medium RESOLV # Generated by NetworkManager search us-east-1.aws.redhat.com nameserver 10.29.169.13 nameserver 10.29.170.12 nameserver 10.2.32.1 TASK [Verify DNS and network connectivity] ************************************* Saturday 06 July 2024 06:59:03 -0400 (0:00:00.379) 0:00:31.008 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euo pipefail\necho CHECK DNS AND CONNECTIVITY\nfor host in mirrors.fedoraproject.org mirrors.centos.org; do\n if ! getent hosts \"$host\"; then\n echo FAILED to lookup host \"$host\"\n exit 1\n fi\n if ! curl -o /dev/null https://\"$host\"; then\n echo FAILED to contact host \"$host\"\n exit 1\n fi\ndone\n", "delta": "0:00:00.444162", "end": "2024-07-06 06:59:04.355968", "rc": 0, "start": "2024-07-06 06:59:03.911806" } STDOUT: CHECK DNS AND CONNECTIVITY 2620:52:3:1:dead:beef:cafe:fed7 wildcard.fedoraproject.org mirrors.fedoraproject.org 2620:52:3:1:dead:beef:cafe:fed6 wildcard.fedoraproject.org mirrors.fedoraproject.org 2600:2701:4000:5211:dead:beef:fe:fed3 wildcard.fedoraproject.org mirrors.fedoraproject.org 2605:bc80:3010:600:dead:beef:cafe:fed9 wildcard.fedoraproject.org mirrors.fedoraproject.org 2600:1f14:fad:5c02:7c8a:72d0:1c58:c189 wildcard.fedoraproject.org mirrors.fedoraproject.org 2604:1580:fe00:0:dead:beef:cafe:fed1 wildcard.fedoraproject.org mirrors.fedoraproject.org 2600:1f14:fad:5c02:7c8a:72d0:1c58:c189 wildcard.fedoraproject.org mirrors.centos.org mirrors.fedoraproject.org 2620:52:3:1:dead:beef:cafe:fed7 wildcard.fedoraproject.org mirrors.centos.org mirrors.fedoraproject.org 2604:1580:fe00:0:dead:beef:cafe:fed1 wildcard.fedoraproject.org mirrors.centos.org mirrors.fedoraproject.org 2605:bc80:3010:600:dead:beef:cafe:fed9 wildcard.fedoraproject.org mirrors.centos.org mirrors.fedoraproject.org 2620:52:3:1:dead:beef:cafe:fed6 wildcard.fedoraproject.org mirrors.centos.org mirrors.fedoraproject.org 2600:2701:4000:5211:dead:beef:fe:fed3 wildcard.fedoraproject.org mirrors.centos.org mirrors.fedoraproject.org STDERR: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 305 100 305 0 0 1404 0 --:--:-- --:--:-- --:--:-- 1405 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 291 100 291 0 0 1509 0 --:--:-- --:--:-- --:--:-- 1515 PLAY RECAP ********************************************************************* managed_node1 : ok=85 changed=2 unreachable=0 failed=0 skipped=51 rescued=0 ignored=1 Saturday 06 July 2024 
06:59:04 -0400 (0:00:00.780)       0:00:31.789 *********
===============================================================================
fedora.linux_system_roles.network : Configure networking connection profiles --- 3.00s
fedora.linux_system_roles.network : Check which packages are installed --- 1.58s
fedora.linux_system_roles.network : Check which packages are installed --- 1.31s
fedora.linux_system_roles.network : Check which services are running ---- 1.12s
fedora.linux_system_roles.network : Check which services are running ---- 1.02s
Create veth interface veth0 --------------------------------------------- 0.94s
Gathering Facts --------------------------------------------------------- 0.90s
fedora.linux_system_roles.network : Enable network service -------------- 0.79s
Verify DNS and network connectivity ------------------------------------- 0.78s
fedora.linux_system_roles.network : Configure networking connection profiles --- 0.69s
Install yum-utils package ----------------------------------------------- 0.69s
Gathering Facts --------------------------------------------------------- 0.67s
Install iproute --------------------------------------------------------- 0.63s
Ensure ping6 command is present ----------------------------------------- 0.58s
fedora.linux_system_roles.network : Re-test connectivity ---------------- 0.52s
Install iproute --------------------------------------------------------- 0.52s
fedora.linux_system_roles.network : Enable network service -------------- 0.50s
Enable EPEL 7 ----------------------------------------------------------- 0.50s
Gather the minimum subset of ansible_facts required by the network role test --- 0.43s
Get NM profile info ----------------------------------------------------- 0.43s
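
For reference, the static IPv6 configuration applied in the "TEST: I can configure an interface with static ipv6 config" play, and the teardown that removes it, can be reconstructed from the module_args recorded above for the "Configure networking connection profiles" tasks. The following is a minimal sketch only: the addresses, gateway, provider, and connection settings are taken from the logged module_args, while the play name, hosts pattern, and use of include_role are assumptions and not the actual test playbook.

---
# Sketch reconstructed from the logged module_args; host and play
# structure are assumptions, not the original tests_ipv6.yml.
- name: Configure veth0 with static IPv6 addresses (initscripts provider)
  hosts: managed_node1
  vars:
    network_provider: initscripts   # matches "Set network provider to 'initscripts'" above
  tasks:
    - name: Apply the veth0 connection profile
      include_role:
        name: fedora.linux_system_roles.network
      vars:
        network_connections:
          - name: veth0
            type: ethernet
            state: up
            ip:
              dhcp4: false
              auto6: false
              address:
                - 2001:db8::2/32
                - 2001:db8::3/32
                - 2001:db8::4/32
              gateway6: 2001:db8::1

    - name: Remove the profile again, as in the "TEARDOWN: remove profiles." play
      include_role:
        name: fedora.linux_system_roles.network
      vars:
        network_connections:
          - name: veth0
            persistent_state: absent
            state: down

With the initscripts provider the role writes the profile to /etc/sysconfig/network-scripts/ifcfg-veth0, which is what the "Stat profile file" and the "# Ansible managed" / "# system_role:network" grep checks above verify; the ignored "Get NM profile info" failure is expected there, since nmcli does not list an initscripts-managed profile under /etc.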