[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var naming standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This feature will be removed from ansible-core in version 2.19. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. No config file found; using defaults running playbook inside collection fedora.linux_system_roles PLAY [Run playbook 'playbooks/tests_bond_options.yml' with nm as provider] ***** TASK [Gathering Facts] ********************************************************* Saturday 06 July 2024 06:45:26 -0400 (0:00:00.008) 0:00:00.008 ********* [WARNING]: Platform linux on host managed_node1 is using the discovered Python interpreter at /usr/bin/python3.12, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible- core/2.17/reference_appendices/interpreter_discovery.html for more information. ok: [managed_node1] TASK [Include the task 'el_repo_setup.yml'] ************************************ Saturday 06 July 2024 06:45:28 -0400 (0:00:01.286) 0:00:01.295 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/tasks/el_repo_setup.yml for managed_node1 TASK [Gather the minimum subset of ansible_facts required by the network role test] *** Saturday 06 July 2024 06:45:28 -0400 (0:00:00.024) 0:00:01.319 ********* ok: [managed_node1] TASK [Check if system is ostree] *********************************************** Saturday 06 July 2024 06:45:28 -0400 (0:00:00.565) 0:00:01.885 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [Set flag to indicate system is ostree] *********************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.496) 0:00:02.381 ********* ok: [managed_node1] => { "ansible_facts": { "__network_is_ostree": false }, "changed": false } TASK [Fix CentOS6 Base repo] *************************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.026) 0:00:02.408 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution == 'CentOS'", "skip_reason": "Conditional result was False" } TASK [Include the task 'enable_epel.yml'] ************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.016) 0:00:02.424 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/tasks/enable_epel.yml for managed_node1 TASK [Create EPEL 39] ********************************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.037) 0:00:02.462 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution in ['RedHat', 'CentOS']", "skip_reason": "Conditional result was False" } TASK [Install yum-utils package] *********************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.016) 0:00:02.479 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution in ['RedHat', 'CentOS']", "skip_reason": "Conditional result was False" } TASK [Enable EPEL 7] *********************************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.015) 0:00:02.495 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution in ['RedHat', 'CentOS']", "skip_reason": "Conditional result 
was False" } TASK [Enable EPEL 8] *********************************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.017) 0:00:02.513 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution in ['RedHat', 'CentOS']", "skip_reason": "Conditional result was False" } TASK [Enable EPEL 6] *********************************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.015) 0:00:02.529 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution in ['RedHat', 'CentOS']", "skip_reason": "Conditional result was False" } TASK [Set network provider to 'nm'] ******************************************** Saturday 06 July 2024 06:45:29 -0400 (0:00:00.015) 0:00:02.544 ********* ok: [managed_node1] => { "ansible_facts": { "network_provider": "nm" }, "changed": false } PLAY [Play for testing bond options] ******************************************* TASK [Gathering Facts] ********************************************************* Saturday 06 July 2024 06:45:29 -0400 (0:00:00.039) 0:00:02.583 ********* ok: [managed_node1] TASK [Show playbook name] ****************************************************** Saturday 06 July 2024 06:45:30 -0400 (0:00:00.980) 0:00:03.564 ********* ok: [managed_node1] => {} MSG: this is: playbooks/tests_bond_options.yml TASK [Include the task 'run_test.yml'] ***************************************** Saturday 06 July 2024 06:45:30 -0400 (0:00:00.021) 0:00:03.586 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/run_test.yml for managed_node1 TASK [TEST: Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device.] *** Saturday 06 July 2024 06:45:30 -0400 (0:00:00.030) 0:00:03.616 ********* ok: [managed_node1] => {} MSG: ########## Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device. ########## TASK [Show item] *************************************************************** Saturday 06 July 2024 06:45:30 -0400 (0:00:00.021) 0:00:03.638 ********* ok: [managed_node1] => (item=lsr_description) => { "ansible_loop_var": "item", "item": "lsr_description", "lsr_description": "Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device." 
} ok: [managed_node1] => (item=lsr_setup) => { "ansible_loop_var": "item", "item": "lsr_setup", "lsr_setup": [ "tasks/create_test_interfaces_with_dhcp.yml", "tasks/assert_dhcp_device_present.yml" ] } ok: [managed_node1] => (item=lsr_test) => { "ansible_loop_var": "item", "item": "lsr_test", "lsr_test": [ "tasks/create_bond_profile.yml" ] } ok: [managed_node1] => (item=lsr_assert) => { "ansible_loop_var": "item", "item": "lsr_assert", "lsr_assert": [ "tasks/assert_controller_device_present.yml", "tasks/assert_bond_port_profile_present.yml", "tasks/assert_bond_options.yml" ] } ok: [managed_node1] => (item=lsr_assert_when) => { "ansible_loop_var": "item", "item": "lsr_assert_when", "lsr_assert_when": "VARIABLE IS NOT DEFINED!: 'lsr_assert_when' is undefined" } ok: [managed_node1] => (item=lsr_fail_debug) => { "ansible_loop_var": "item", "item": "lsr_fail_debug", "lsr_fail_debug": [ "__network_connections_result" ] } ok: [managed_node1] => (item=lsr_cleanup) => { "ansible_loop_var": "item", "item": "lsr_cleanup", "lsr_cleanup": [ "tasks/cleanup_bond_profile+device.yml", "tasks/remove_test_interfaces_with_dhcp.yml" ] } TASK [Include the task 'show_interfaces.yml'] ********************************** Saturday 06 July 2024 06:45:30 -0400 (0:00:00.062) 0:00:03.700 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/show_interfaces.yml for managed_node1 TASK [Include the task 'get_current_interfaces.yml'] *************************** Saturday 06 July 2024 06:45:30 -0400 (0:00:00.029) 0:00:03.730 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_current_interfaces.yml for managed_node1 TASK [Gather current interface info] ******************************************* Saturday 06 July 2024 06:45:30 -0400 (0:00:00.024) 0:00:03.754 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ls", "-1" ], "delta": "0:00:00.003343", "end": "2024-07-06 06:45:31.027020", "rc": 0, "start": "2024-07-06 06:45:31.023677" } STDOUT: bonding_masters eth0 lo rpltstbr team0 TASK [Set current_interfaces] ************************************************** Saturday 06 July 2024 06:45:31 -0400 (0:00:00.500) 0:00:04.254 ********* ok: [managed_node1] => { "ansible_facts": { "current_interfaces": [ "bonding_masters", "eth0", "lo", "rpltstbr", "team0" ] }, "changed": false } TASK [Show current_interfaces] ************************************************* Saturday 06 July 2024 06:45:31 -0400 (0:00:00.020) 0:00:04.275 ********* ok: [managed_node1] => {} MSG: current_interfaces: ['bonding_masters', 'eth0', 'lo', 'rpltstbr', 'team0'] TASK [Setup] ******************************************************************* Saturday 06 July 2024 06:45:31 -0400 (0:00:00.021) 0:00:04.297 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/create_test_interfaces_with_dhcp.yml for managed_node1 => (item=tasks/create_test_interfaces_with_dhcp.yml) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_dhcp_device_present.yml for managed_node1 => (item=tasks/assert_dhcp_device_present.yml) TASK [Install dnsmasq] ********************************************************* Saturday 06 July 2024 
06:45:31 -0400 (0:00:00.049) 0:00:04.346 ********* ok: [managed_node1] => { "attempts": 1, "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Install pgrep, sysctl] *************************************************** Saturday 06 July 2024 06:45:33 -0400 (0:00:01.898) 0:00:06.245 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution_major_version is version('6', '<=')", "skip_reason": "Conditional result was False" } TASK [Install pgrep, sysctl] *************************************************** Saturday 06 July 2024 06:45:33 -0400 (0:00:00.021) 0:00:06.267 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Create test interfaces] ************************************************** Saturday 06 July 2024 06:45:34 -0400 (0:00:01.743) 0:00:08.011 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euxo pipefail\nexec 1>&2\nip link add test1 type veth peer name test1p\nip link add test2 type veth peer name test2p\nif [ -n \"$(pgrep NetworkManager)\" ];then\n nmcli d set test1 managed true\n nmcli d set test2 managed true\n # NetworkManager should not manage DHCP server ports\n nmcli d set test1p managed false\n nmcli d set test2p managed false\nfi\nip link set test1p up\nip link set test2p up\n\n# Create the 'testbr' - providing both 10.x ipv4 and 2620:52:0 ipv6 dhcp\nip link add name testbr type bridge forward_delay 0\nif [ -n \"$(pgrep NetworkManager)\" ];then\n # NetworkManager should not manage DHCP server ports\n nmcli d set testbr managed false\nfi\nip link set testbr up\ntimer=0\n# The while loop following is a workaround for the NM bug, which can be\n# tracked in https://bugzilla.redhat.com/show_bug.cgi?id=2079642\nwhile ! ip addr show testbr | grep -q 'inet [1-9]'\ndo\n let \"timer+=1\"\n if [ $timer -eq 30 ]; then\n echo ERROR - could not add testbr\n ip addr\n exit 1\n fi\n sleep 1\n rc=0\n ip addr add 192.0.2.1/24 dev testbr || rc=\"$?\"\n if [ \"$rc\" != 0 ]; then\n echo NOTICE - could not add testbr - error code \"$rc\"\n continue\n fi\n ip -6 addr add 2001:DB8::1/32 dev testbr || rc=\"$?\"\n if [ \"$rc\" != 0 ]; then\n echo NOTICE - could not add testbr - error code \"$rc\"\n continue\n fi\ndone\n\nif grep 'release 6' /etc/redhat-release; then\n # We need bridge-utils and radvd only in rhel6\n if ! rpm -q --quiet radvd; then yum -y install radvd; fi\n if ! 
rpm -q --quiet bridge-utils; then yum -y install bridge-utils; fi\n\n # We need to add iptables rule to allow dhcp request\n iptables -I INPUT -i testbr -p udp --dport 67:68 --sport 67:68 -j ACCEPT\n\n # Add test1, test2 peers into the testbr\n brctl addif testbr test1p\n brctl addif testbr test2p\n\n # in RHEL6 /run is not present\n mkdir -p /run\n\n # and dnsmasq does not support ipv6\n dnsmasq --pid-file=/run/dhcp_testbr.pid --dhcp-leasefile=/run/dhcp_testbr.lease --dhcp-range=192.0.2.1,192.0.2.254,240 --interface=testbr --bind-interfaces\n\n # start radvd for ipv6\n echo 'interface testbr {' > /etc/radvd.conf\n echo ' AdvSendAdvert on;' >> /etc/radvd.conf\n echo ' prefix 2001:DB8::/64 { ' >> /etc/radvd.conf\n echo ' AdvOnLink on; }; ' >> /etc/radvd.conf\n echo ' }; ' >> /etc/radvd.conf\n\n # enable ipv6 forwarding\n sysctl -w net.ipv6.conf.all.forwarding=1\n service radvd restart\n\nelse\n ip link set test1p master testbr\n ip link set test2p master testbr\n # Run joint DHCP4/DHCP6 server with RA enabled in veth namespace\n dnsmasq --pid-file=/run/dhcp_testbr.pid --dhcp-leasefile=/run/dhcp_testbr.lease --dhcp-range=192.0.2.1,192.0.2.254,240 --dhcp-range=2001:DB8::10,2001:DB8::1FF,slaac,64,240 --enable-ra --interface=testbr --bind-interfaces\nfi\n", "delta": "0:00:01.273666", "end": "2024-07-06 06:45:36.437690", "rc": 0, "start": "2024-07-06 06:45:35.164024" } STDERR: + exec + ip link add test1 type veth peer name test1p + ip link add test2 type veth peer name test2p ++ pgrep NetworkManager + '[' -n 59278 ']' + nmcli d set test1 managed true + nmcli d set test2 managed true + nmcli d set test1p managed false + nmcli d set test2p managed false + ip link set test1p up + ip link set test2p up + ip link add name testbr type bridge forward_delay 0 ++ pgrep NetworkManager + '[' -n 59278 ']' + nmcli d set testbr managed false + ip link set testbr up + timer=0 + ip addr show testbr + grep -q 'inet [1-9]' + let timer+=1 + '[' 1 -eq 30 ']' + sleep 1 + rc=0 + ip addr add 192.0.2.1/24 dev testbr + '[' 0 '!=' 0 ']' + ip -6 addr add 2001:DB8::1/32 dev testbr + '[' 0 '!=' 0 ']' + ip addr show testbr + grep -q 'inet [1-9]' + grep 'release 6' /etc/redhat-release + ip link set test1p master testbr + ip link set test2p master testbr + dnsmasq --pid-file=/run/dhcp_testbr.pid --dhcp-leasefile=/run/dhcp_testbr.lease --dhcp-range=192.0.2.1,192.0.2.254,240 --dhcp-range=2001:DB8::10,2001:DB8::1FF,slaac,64,240 --enable-ra --interface=testbr --bind-interfaces TASK [Include the task 'get_interface_stat.yml'] ******************************* Saturday 06 July 2024 06:45:36 -0400 (0:00:01.656) 0:00:09.667 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_interface_stat.yml for managed_node1 TASK [Get stat for interface test1] ******************************************** Saturday 06 July 2024 06:45:36 -0400 (0:00:00.024) 0:00:09.692 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1720262735.1694274, "block_size": 4096, "blocks": 0, "ctime": 1720262735.1694274, "dev": 23, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 44758, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/sys/devices/virtual/net/test1", "lnk_target": "../../devices/virtual/net/test1", "mode": "0777", "mtime": 1720262735.1694274, "nlink": 1, "path": 
"/sys/class/net/test1", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [Assert that the interface is present - 'test1'] ************************** Saturday 06 July 2024 06:45:36 -0400 (0:00:00.379) 0:00:10.072 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Include the task 'get_interface_stat.yml'] ******************************* Saturday 06 July 2024 06:45:36 -0400 (0:00:00.022) 0:00:10.094 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_interface_stat.yml for managed_node1 TASK [Get stat for interface test2] ******************************************** Saturday 06 July 2024 06:45:36 -0400 (0:00:00.028) 0:00:10.122 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1720262735.177371, "block_size": 4096, "blocks": 0, "ctime": 1720262735.177371, "dev": 23, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 45164, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/sys/devices/virtual/net/test2", "lnk_target": "../../devices/virtual/net/test2", "mode": "0777", "mtime": 1720262735.177371, "nlink": 1, "path": "/sys/class/net/test2", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [Assert that the interface is present - 'test2'] ************************** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.386) 0:00:10.509 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Test] ******************************************************************** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.024) 0:00:10.533 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/create_bond_profile.yml for managed_node1 => (item=tasks/create_bond_profile.yml) TASK [Include network role] **************************************************** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.036) 0:00:10.569 ********* included: fedora.linux_system_roles.network for managed_node1 TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role] *** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.053) 0:00:10.623 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/roles/network/tasks/set_facts.yml for managed_node1 TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role are present] *** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.042) 0:00:10.666 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check if system is ostree] *********** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.026) 0:00:10.692 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.network : Set flag to indicate system is ostree] *** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.021) 0:00:10.714 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check which services are running] **** Saturday 06 July 2024 06:45:37 -0400 (0:00:00.022) 0:00:10.737 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check which packages are installed] *** Saturday 06 July 2024 06:45:39 -0400 (0:00:02.368) 0:00:13.105 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Print network provider] ************** Saturday 06 July 2024 06:45:41 -0400 (0:00:01.273) 0:00:14.379 ********* ok: [managed_node1] => {} MSG: Using network provider: nm TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if using the `network_state` variable with the initscripts provider] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.049) 0:00:14.428 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if the system version of the managed host is below 8] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.051) 0:00:14.480 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the DNF package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.051) 0:00:14.532 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the YUM package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.071) 0:00:14.604 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution_major_version | int < 8", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Ask user's consent to restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.083) 0:00:14.687 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install packages] ******************** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.071) 0:00:14.759 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not network_packages is subset(ansible_facts.packages.keys())", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install NetworkManager and nmstate when using network_state 
variable] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.122) 0:00:14.881 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install python3-libnmstate when using network_state variable] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.051) 0:00:14.933 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.052) 0:00:14.986 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable and start NetworkManager] ***** Saturday 06 July 2024 06:45:41 -0400 (0:00:00.069) 0:00:15.056 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Enable and start wpa_supplicant] ***** Saturday 06 July 2024 06:45:42 -0400 (0:00:00.908) 0:00:15.964 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wpa_supplicant_required", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable network service] ************** Saturday 06 July 2024 06:45:42 -0400 (0:00:00.072) 0:00:16.037 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Ensure initscripts network file dependency is present] *** Saturday 06 July 2024 06:45:42 -0400 (0:00:00.045) 0:00:16.082 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_provider == \"initscripts\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Configure networking connection profiles] *** Saturday 06 July 2024 06:45:42 -0400 (0:00:00.047) 0:00:16.130 ********* changed: [managed_node1] => { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "bond": { "ad_actor_sys_prio": 65535, "ad_actor_system": "00:00:5e:00:53:5d", "ad_select": "stable", "ad_user_port_key": 1023, "all_ports_active": true, "downdelay": 0, "lacp_rate": "slow", "lp_interval": 128, "miimon": 110, "min_links": 0, "mode": "802.3ad", "num_grat_arp": 64, "primary_reselect": "better", "resend_igmp": 225, "updelay": 0, "use_carrier": true, "xmit_hash_policy": "encap2+3" }, "interface_name": "nm-bond", "ip": { "route_metric4": 65535 }, "name": "bond0", "state": "up", "type": "bond" }, { "controller": "bond0", "interface_name": "test1", "name": "bond0.0", "state": "up", "type": "ethernet" }, { "controller": "bond0", "interface_name": "test2", "name": "bond0.1", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true } STDERR: [007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a [008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, 
d3bfc4c5-875a-40df-a3dd-a01d7858a5c3 [009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d [010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a (is-modified) [011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3 (not-active) [012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d (not-active) TASK [fedora.linux_system_roles.network : Configure networking state] ********** Saturday 06 July 2024 06:45:43 -0400 (0:00:00.985) 0:00:17.115 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Show stderr messages for the network_connections] *** Saturday 06 July 2024 06:45:43 -0400 (0:00:00.051) 0:00:17.167 ********* ok: [managed_node1] => { "__network_connections_result.stderr_lines": [ "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a", "[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3", "[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d", "[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a (is-modified)", "[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3 (not-active)", "[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d (not-active)" ] } TASK [fedora.linux_system_roles.network : Show debug messages for the network_connections] *** Saturday 06 July 2024 06:45:44 -0400 (0:00:00.052) 0:00:17.219 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "bond": { "ad_actor_sys_prio": 65535, "ad_actor_system": "00:00:5e:00:53:5d", "ad_select": "stable", "ad_user_port_key": 1023, "all_ports_active": true, "downdelay": 0, "lacp_rate": "slow", "lp_interval": 128, "miimon": 110, "min_links": 0, "mode": "802.3ad", "num_grat_arp": 64, "primary_reselect": "better", "resend_igmp": 225, "updelay": 0, "use_carrier": true, "xmit_hash_policy": "encap2+3" }, "interface_name": "nm-bond", "ip": { "route_metric4": 65535 }, "name": "bond0", "state": "up", "type": "bond" }, { "controller": "bond0", "interface_name": "test1", "name": "bond0.0", "state": "up", "type": "ethernet" }, { "controller": "bond0", "interface_name": "test2", "name": "bond0.1", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true, "failed": false, "stderr": "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a\n[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3\n[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d\n[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a (is-modified)\n[011] #1, state:up 
persistent_state:present, 'bond0.0': up connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3 (not-active)\n[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d (not-active)\n", "stderr_lines": [ "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a", "[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3", "[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d", "[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a (is-modified)", "[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3 (not-active)", "[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d (not-active)" ] } } TASK [fedora.linux_system_roles.network : Show debug messages for the network_state] *** Saturday 06 July 2024 06:45:44 -0400 (0:00:00.052) 0:00:17.271 ********* skipping: [managed_node1] => { "false_condition": "network_state is defined" } TASK [fedora.linux_system_roles.network : Re-test connectivity] **************** Saturday 06 July 2024 06:45:44 -0400 (0:00:00.048) 0:00:17.320 ********* ok: [managed_node1] => { "changed": false, "ping": "pong" } TASK [Show result] ************************************************************* Saturday 06 July 2024 06:45:44 -0400 (0:00:00.630) 0:00:17.950 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "bond": { "ad_actor_sys_prio": 65535, "ad_actor_system": "00:00:5e:00:53:5d", "ad_select": "stable", "ad_user_port_key": 1023, "all_ports_active": true, "downdelay": 0, "lacp_rate": "slow", "lp_interval": 128, "miimon": 110, "min_links": 0, "mode": "802.3ad", "num_grat_arp": 64, "primary_reselect": "better", "resend_igmp": 225, "updelay": 0, "use_carrier": true, "xmit_hash_policy": "encap2+3" }, "interface_name": "nm-bond", "ip": { "route_metric4": 65535 }, "name": "bond0", "state": "up", "type": "bond" }, { "controller": "bond0", "interface_name": "test1", "name": "bond0.0", "state": "up", "type": "ethernet" }, { "controller": "bond0", "interface_name": "test2", "name": "bond0.1", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true, "failed": false, "stderr": "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a\n[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3\n[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d\n[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a (is-modified)\n[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3 (not-active)\n[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d (not-active)\n", "stderr_lines": [ "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a", "[008] 
#1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3", "[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d", "[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 0508b54d-bea9-485e-beb8-bfb4b9f9b03a (is-modified)", "[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, d3bfc4c5-875a-40df-a3dd-a01d7858a5c3 (not-active)", "[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 56942c73-a8f0-4110-86dc-c05fba28d47d (not-active)" ] } } TASK [Asserts] ***************************************************************** Saturday 06 July 2024 06:45:44 -0400 (0:00:00.054) 0:00:18.005 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_controller_device_present.yml for managed_node1 => (item=tasks/assert_controller_device_present.yml) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_bond_port_profile_present.yml for managed_node1 => (item=tasks/assert_bond_port_profile_present.yml) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_bond_options.yml for managed_node1 => (item=tasks/assert_bond_options.yml) TASK [Include the task 'get_interface_stat.yml'] ******************************* Saturday 06 July 2024 06:45:44 -0400 (0:00:00.106) 0:00:18.112 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_interface_stat.yml for managed_node1 TASK [Get stat for interface nm-bond] ****************************************** Saturday 06 July 2024 06:45:45 -0400 (0:00:00.078) 0:00:18.190 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1720262743.6751568, "block_size": 4096, "blocks": 0, "ctime": 1720262743.6751568, "dev": 23, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 45566, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/sys/devices/virtual/net/nm-bond", "lnk_target": "../../devices/virtual/net/nm-bond", "mode": "0777", "mtime": 1720262743.6751568, "nlink": 1, "path": "/sys/class/net/nm-bond", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [Assert that the interface is present - 'nm-bond'] ************************ Saturday 06 July 2024 06:45:45 -0400 (0:00:00.413) 0:00:18.604 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Include the task 'assert_profile_present.yml'] *************************** Saturday 06 July 2024 06:45:45 -0400 (0:00:00.052) 0:00:18.657 ********* [WARNING]: TASK: Include the task 'assert_profile_present.yml': The loop variable 'item' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions and unexpected behavior. 
included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_profile_present.yml for managed_node1 => (item=bond0) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_profile_present.yml for managed_node1 => (item=bond0.0) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_profile_present.yml for managed_node1 => (item=bond0.1) TASK [Include the task 'get_profile_stat.yml'] ********************************* Saturday 06 July 2024 06:45:45 -0400 (0:00:00.097) 0:00:18.754 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_profile_stat.yml for managed_node1 TASK [Initialize NM profile exist and ansible_managed comment flag] ************ Saturday 06 July 2024 06:45:45 -0400 (0:00:00.090) 0:00:18.845 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": false, "lsr_net_profile_exists": false, "lsr_net_profile_fingerprint": false }, "changed": false } TASK [Stat profile file] ******************************************************* Saturday 06 July 2024 06:45:45 -0400 (0:00:00.048) 0:00:18.894 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [Set NM profile exist flag based on the profile files] ******************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.412) 0:00:19.307 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Get NM profile info] ***************************************************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.082) 0:00:19.389 ********* ok: [managed_node1] => { "changed": false, "cmd": "nmcli -f NAME,FILENAME connection show |grep bond0 | grep /etc", "delta": "0:00:00.026357", "end": "2024-07-06 06:45:46.566626", "rc": 0, "start": "2024-07-06 06:45:46.540269" } STDOUT: bond0 /etc/NetworkManager/system-connections/bond0.nmconnection bond0.0 /etc/NetworkManager/system-connections/bond0.0.nmconnection bond0.1 /etc/NetworkManager/system-connections/bond0.1.nmconnection TASK [Set NM profile exist flag and ansible_managed flag true based on the nmcli output] *** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.431) 0:00:19.820 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": true, "lsr_net_profile_exists": true, "lsr_net_profile_fingerprint": true }, "changed": false } TASK [Get the ansible_managed comment in ifcfg-bond0] ************************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.049) 0:00:19.870 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Verify the ansible_managed comment in ifcfg-bond0] *********************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.047) 0:00:19.918 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Get the fingerprint comment in ifcfg-bond0] ****************************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.047) 0:00:19.966 ********* skipping: 
[managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Verify the fingerprint comment in ifcfg-bond0] *************************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.045) 0:00:20.011 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Assert that the profile is present - 'bond0'] **************************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.046) 0:00:20.057 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the ansible managed comment is present in 'bond0'] *********** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.055) 0:00:20.113 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the fingerprint comment is present in bond0] ***************** Saturday 06 July 2024 06:45:46 -0400 (0:00:00.052) 0:00:20.165 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Include the task 'get_profile_stat.yml'] ********************************* Saturday 06 July 2024 06:45:47 -0400 (0:00:00.052) 0:00:20.218 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_profile_stat.yml for managed_node1 TASK [Initialize NM profile exist and ansible_managed comment flag] ************ Saturday 06 July 2024 06:45:47 -0400 (0:00:00.082) 0:00:20.301 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": false, "lsr_net_profile_exists": false, "lsr_net_profile_fingerprint": false }, "changed": false } TASK [Stat profile file] ******************************************************* Saturday 06 July 2024 06:45:47 -0400 (0:00:00.049) 0:00:20.350 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [Set NM profile exist flag based on the profile files] ******************** Saturday 06 July 2024 06:45:47 -0400 (0:00:00.405) 0:00:20.756 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Get NM profile info] ***************************************************** Saturday 06 July 2024 06:45:47 -0400 (0:00:00.044) 0:00:20.800 ********* ok: [managed_node1] => { "changed": false, "cmd": "nmcli -f NAME,FILENAME connection show |grep bond0.0 | grep /etc", "delta": "0:00:00.029180", "end": "2024-07-06 06:45:47.989169", "rc": 0, "start": "2024-07-06 06:45:47.959989" } STDOUT: bond0.0 /etc/NetworkManager/system-connections/bond0.0.nmconnection TASK [Set NM profile exist flag and ansible_managed flag true based on the nmcli output] *** Saturday 06 July 2024 06:45:48 -0400 (0:00:00.479) 0:00:21.280 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": true, "lsr_net_profile_exists": true, "lsr_net_profile_fingerprint": true }, "changed": false } TASK [Get the ansible_managed comment in ifcfg-bond0.0] ************************ Saturday 06 July 2024 06:45:48 -0400 (0:00:00.051) 0:00:21.332 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Verify the ansible_managed comment in ifcfg-bond0.0] ********************* Saturday 06 July 2024 06:45:48 -0400 (0:00:00.048) 
0:00:21.380 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Get the fingerprint comment in ifcfg-bond0.0] **************************** Saturday 06 July 2024 06:45:48 -0400 (0:00:00.048) 0:00:21.428 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Verify the fingerprint comment in ifcfg-bond0.0] ************************* Saturday 06 July 2024 06:45:48 -0400 (0:00:00.046) 0:00:21.475 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Assert that the profile is present - 'bond0.0'] ************************** Saturday 06 July 2024 06:45:48 -0400 (0:00:00.044) 0:00:21.519 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the ansible managed comment is present in 'bond0.0'] ********* Saturday 06 July 2024 06:45:48 -0400 (0:00:00.053) 0:00:21.573 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the fingerprint comment is present in bond0.0] *************** Saturday 06 July 2024 06:45:48 -0400 (0:00:00.054) 0:00:21.628 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Include the task 'get_profile_stat.yml'] ********************************* Saturday 06 July 2024 06:45:48 -0400 (0:00:00.054) 0:00:21.682 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_profile_stat.yml for managed_node1 TASK [Initialize NM profile exist and ansible_managed comment flag] ************ Saturday 06 July 2024 06:45:48 -0400 (0:00:00.087) 0:00:21.769 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": false, "lsr_net_profile_exists": false, "lsr_net_profile_fingerprint": false }, "changed": false } TASK [Stat profile file] ******************************************************* Saturday 06 July 2024 06:45:48 -0400 (0:00:00.048) 0:00:21.817 ********* ok: [managed_node1] => { "changed": false, "stat": { "exists": false } } TASK [Set NM profile exist flag based on the profile files] ******************** Saturday 06 July 2024 06:45:49 -0400 (0:00:00.419) 0:00:22.237 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Get NM profile info] ***************************************************** Saturday 06 July 2024 06:45:49 -0400 (0:00:00.044) 0:00:22.282 ********* ok: [managed_node1] => { "changed": false, "cmd": "nmcli -f NAME,FILENAME connection show |grep bond0.1 | grep /etc", "delta": "0:00:00.027557", "end": "2024-07-06 06:45:49.467224", "rc": 0, "start": "2024-07-06 06:45:49.439667" } STDOUT: bond0.1 /etc/NetworkManager/system-connections/bond0.1.nmconnection TASK [Set NM profile exist flag and ansible_managed flag true based on the nmcli output] *** Saturday 06 July 2024 06:45:49 -0400 (0:00:00.440) 0:00:22.722 ********* ok: [managed_node1] => { "ansible_facts": { "lsr_net_profile_ansible_managed": true, "lsr_net_profile_exists": true, "lsr_net_profile_fingerprint": true }, "changed": false } TASK [Get the ansible_managed comment in ifcfg-bond0.1] ************************ Saturday 06 July 2024 
06:45:49 -0400 (0:00:00.096) 0:00:22.819 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Verify the ansible_managed comment in ifcfg-bond0.1] ********************* Saturday 06 July 2024 06:45:49 -0400 (0:00:00.047) 0:00:22.867 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Get the fingerprint comment in ifcfg-bond0.1] **************************** Saturday 06 July 2024 06:45:49 -0400 (0:00:00.050) 0:00:22.917 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Verify the fingerprint comment in ifcfg-bond0.1] ************************* Saturday 06 July 2024 06:45:49 -0400 (0:00:00.048) 0:00:22.965 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "profile_stat.stat.exists", "skip_reason": "Conditional result was False" } TASK [Assert that the profile is present - 'bond0.1'] ************************** Saturday 06 July 2024 06:45:49 -0400 (0:00:00.046) 0:00:23.012 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the ansible managed comment is present in 'bond0.1'] ********* Saturday 06 July 2024 06:45:49 -0400 (0:00:00.052) 0:00:23.064 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Assert that the fingerprint comment is present in bond0.1] *************** Saturday 06 July 2024 06:45:49 -0400 (0:00:00.054) 0:00:23.119 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [** TEST check bond settings] ********************************************* Saturday 06 July 2024 06:45:49 -0400 (0:00:00.051) 0:00:23.171 ********* [WARNING]: TASK: ** TEST check bond settings: The loop variable 'item' is already in use. You should set the `loop_var` value in the `loop_control` option for the task to something else to avoid variable collisions and unexpected behavior. [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: '{{ item.value }}' in result.stdout ok: [managed_node1] => (item={'key': 'mode', 'value': '802.3ad'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/mode" ], "delta": "0:00:00.003114", "end": "2024-07-06 06:45:50.331012", "item": { "key": "mode", "value": "802.3ad" }, "rc": 0, "start": "2024-07-06 06:45:50.327898" } STDOUT: 802.3ad 4 ok: [managed_node1] => (item={'key': 'ad_actor_sys_prio', 'value': '65535'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/ad_actor_sys_prio" ], "delta": "0:00:00.003255", "end": "2024-07-06 06:45:50.698952", "item": { "key": "ad_actor_sys_prio", "value": "65535" }, "rc": 0, "start": "2024-07-06 06:45:50.695697" } STDOUT: 65535 ok: [managed_node1] => (item={'key': 'ad_actor_system', 'value': '00:00:5e:00:53:5d'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/ad_actor_system" ], "delta": "0:00:00.002928", "end": "2024-07-06 06:45:51.062274", "item": { "key": "ad_actor_system", "value": "00:00:5e:00:53:5d" }, "rc": 0, "start": "2024-07-06 06:45:51.059346" } STDOUT: 00:00:5e:00:53:5d ok: [managed_node1] => (item={'key': 'ad_select', 'value': 'stable'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/ad_select" ], "delta": "0:00:00.002967", "end": "2024-07-06 06:45:51.428518", "item": { "key": "ad_select", "value": "stable" }, "rc": 0, "start": "2024-07-06 06:45:51.425551" } STDOUT: stable 0 ok: [managed_node1] => (item={'key': 'ad_user_port_key', 'value': '1023'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/ad_user_port_key" ], "delta": "0:00:00.003368", "end": "2024-07-06 06:45:51.796369", "item": { "key": "ad_user_port_key", "value": "1023" }, "rc": 0, "start": "2024-07-06 06:45:51.793001" } STDOUT: 1023 ok: [managed_node1] => (item={'key': 'all_slaves_active', 'value': '1'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/all_slaves_active" ], "delta": "0:00:01.005311", "end": "2024-07-06 06:45:53.168566", "item": { "key": "all_slaves_active", "value": "1" }, "rc": 0, "start": "2024-07-06 06:45:52.163255" } STDOUT: 1 ok: [managed_node1] => (item={'key': 'downdelay', 'value': '0'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/downdelay" ], "delta": "0:00:00.005607", "end": "2024-07-06 06:45:53.546967", "item": { "key": "downdelay", "value": "0" }, "rc": 0, "start": "2024-07-06 06:45:53.541360" } STDOUT: 0 ok: [managed_node1] => (item={'key': 'lacp_rate', 'value': 'slow'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/lacp_rate" ], "delta": "0:00:00.003017", "end": "2024-07-06 06:45:53.916562", "item": { "key": "lacp_rate", "value": "slow" }, "rc": 0, "start": "2024-07-06 06:45:53.913545" } STDOUT: slow 0 ok: [managed_node1] => (item={'key': 'lp_interval', 'value': '128'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/lp_interval" ], "delta": "0:00:01.004769", "end": "2024-07-06 06:45:55.286095", "item": { "key": "lp_interval", "value": "128" }, "rc": 0, "start": "2024-07-06 06:45:54.281326" } STDOUT: 128 ok: [managed_node1] => (item={'key': 'miimon', 
'value': '110'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/miimon" ], "delta": "0:00:00.003188", "end": "2024-07-06 06:45:55.655414", "item": { "key": "miimon", "value": "110" }, "rc": 0, "start": "2024-07-06 06:45:55.652226" } STDOUT: 110 ok: [managed_node1] => (item={'key': 'num_grat_arp', 'value': '64'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/num_grat_arp" ], "delta": "0:00:00.003122", "end": "2024-07-06 06:45:56.020952", "item": { "key": "num_grat_arp", "value": "64" }, "rc": 0, "start": "2024-07-06 06:45:56.017830" } STDOUT: 64 ok: [managed_node1] => (item={'key': 'resend_igmp', 'value': '225'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/resend_igmp" ], "delta": "0:00:00.003052", "end": "2024-07-06 06:45:56.391270", "item": { "key": "resend_igmp", "value": "225" }, "rc": 0, "start": "2024-07-06 06:45:56.388218" } STDOUT: 225 ok: [managed_node1] => (item={'key': 'updelay', 'value': '0'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/updelay" ], "delta": "0:00:00.003209", "end": "2024-07-06 06:45:56.758573", "item": { "key": "updelay", "value": "0" }, "rc": 0, "start": "2024-07-06 06:45:56.755364" } STDOUT: 0 ok: [managed_node1] => (item={'key': 'use_carrier', 'value': '1'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/use_carrier" ], "delta": "0:00:01.004078", "end": "2024-07-06 06:45:58.127902", "item": { "key": "use_carrier", "value": "1" }, "rc": 0, "start": "2024-07-06 06:45:57.123824" } STDOUT: 1 ok: [managed_node1] => (item={'key': 'xmit_hash_policy', 'value': 'encap2+3'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/xmit_hash_policy" ], "delta": "0:00:00.002973", "end": "2024-07-06 06:45:58.498295", "item": { "key": "xmit_hash_policy", "value": "encap2+3" }, "rc": 0, "start": "2024-07-06 06:45:58.495322" } STDOUT: encap2+3 3 TASK [Include the task 'assert_IPv4_present.yml'] ****************************** Saturday 06 July 2024 06:45:58 -0400 (0:00:08.589) 0:00:31.761 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_IPv4_present.yml for managed_node1 TASK [** TEST check IPv4] ****************************************************** Saturday 06 July 2024 06:45:58 -0400 (0:00:00.077) 0:00:31.838 ********* [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: '{{ address }}' in result.stdout ok: [managed_node1] => { "attempts": 1, "changed": false, "cmd": [ "ip", "-4", "a", "s", "nm-bond" ], "delta": "0:00:00.005938", "end": "2024-07-06 06:45:59.005487", "rc": 0, "start": "2024-07-06 06:45:58.999549" } STDOUT: 67: nm-bond: mtu 1500 qdisc noqueue state UP group default qlen 1000 inet 192.0.2.91/24 brd 192.0.2.255 scope global dynamic noprefixroute nm-bond valid_lft 228sec preferred_lft 228sec TASK [Include the task 'assert_IPv6_present.yml'] ****************************** Saturday 06 July 2024 06:45:59 -0400 (0:00:00.423) 0:00:32.262 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_IPv6_present.yml for managed_node1 TASK [** TEST check IPv6] ****************************************************** Saturday 06 July 2024 06:45:59 -0400 (0:00:00.081) 0:00:32.344 ********* ok: [managed_node1] => { "attempts": 1, "changed": false, "cmd": [ "ip", "-6", "a", "s", "nm-bond" ], "delta": "0:00:00.003622", "end": "2024-07-06 06:45:59.500901", "rc": 0, "start": "2024-07-06 06:45:59.497279" } STDOUT: 67: nm-bond: mtu 1500 qdisc noqueue state UP group default qlen 1000 inet6 2001:db8::1e2/128 scope global dynamic noprefixroute valid_lft 228sec preferred_lft 228sec inet6 2001:db8::2054:342c:ea0b:f03e/64 scope global dynamic noprefixroute valid_lft 1794sec preferred_lft 1794sec inet6 fe80::521a:d30d:1ebb:9d9c/64 scope link noprefixroute valid_lft forever preferred_lft forever TASK [Conditional asserts] ***************************************************** Saturday 06 July 2024 06:45:59 -0400 (0:00:00.413) 0:00:32.757 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [Success in test 'Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device.'] *** Saturday 06 July 2024 06:45:59 -0400 (0:00:00.092) 0:00:32.849 ********* ok: [managed_node1] => {} MSG: +++++ Success in test 'Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device.' 
+++++ TASK [Cleanup] ***************************************************************** Saturday 06 July 2024 06:45:59 -0400 (0:00:00.051) 0:00:32.901 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/cleanup_bond_profile+device.yml for managed_node1 => (item=tasks/cleanup_bond_profile+device.yml) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/remove_test_interfaces_with_dhcp.yml for managed_node1 => (item=tasks/remove_test_interfaces_with_dhcp.yml) TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role] *** Saturday 06 July 2024 06:45:59 -0400 (0:00:00.128) 0:00:33.029 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/roles/network/tasks/set_facts.yml for managed_node1 TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role are present] *** Saturday 06 July 2024 06:45:59 -0400 (0:00:00.094) 0:00:33.124 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check if system is ostree] *********** Saturday 06 July 2024 06:46:00 -0400 (0:00:00.062) 0:00:33.187 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Set flag to indicate system is ostree] *** Saturday 06 July 2024 06:46:00 -0400 (0:00:00.052) 0:00:33.239 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check which services are running] **** Saturday 06 July 2024 06:46:00 -0400 (0:00:00.053) 0:00:33.293 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check which packages are installed] *** Saturday 06 July 2024 06:46:02 -0400 (0:00:02.363) 0:00:35.656 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Print network provider] ************** Saturday 06 July 2024 06:46:03 -0400 (0:00:01.021) 0:00:36.677 ********* ok: [managed_node1] => {} MSG: Using network provider: nm TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if using the `network_state` variable with the initscripts provider] *** Saturday 06 July 2024 06:46:03 -0400 (0:00:00.050) 0:00:36.728 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if the system version of the managed host is below 8] *** Saturday 06 July 2024 06:46:03 -0400 (0:00:00.089) 0:00:36.818 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK 
[fedora.linux_system_roles.network : Check if updates for network packages are available through the DNF package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:03 -0400 (0:00:00.051) 0:00:36.870 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the YUM package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:03 -0400 (0:00:00.071) 0:00:36.941 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution_major_version | int < 8", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Ask user's consent to restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:03 -0400 (0:00:00.054) 0:00:36.996 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install packages] ******************** Saturday 06 July 2024 06:46:03 -0400 (0:00:00.065) 0:00:37.062 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not network_packages is subset(ansible_facts.packages.keys())", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install NetworkManager and nmstate when using network_state variable] *** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.110) 0:00:37.172 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install python3-libnmstate when using network_state variable] *** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.052) 0:00:37.225 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.051) 0:00:37.277 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable and start NetworkManager] ***** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.066) 0:00:37.344 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Enable and start wpa_supplicant] ***** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.656) 0:00:38.000 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wpa_supplicant_required", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable network service] ************** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.067) 0:00:38.068 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK 
[fedora.linux_system_roles.network : Ensure initscripts network file dependency is present] *** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.045) 0:00:38.113 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_provider == \"initscripts\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Configure networking connection profiles] *** Saturday 06 July 2024 06:46:04 -0400 (0:00:00.048) 0:00:38.162 ********* changed: [managed_node1] => { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "name": "bond0.1", "persistent_state": "absent", "state": "down" }, { "name": "bond0.0", "persistent_state": "absent", "state": "down" }, { "name": "bond0", "persistent_state": "absent", "state": "down" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true } STDERR: TASK [fedora.linux_system_roles.network : Configure networking state] ********** Saturday 06 July 2024 06:46:06 -0400 (0:00:01.025) 0:00:39.187 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Show stderr messages for the network_connections] *** Saturday 06 July 2024 06:46:06 -0400 (0:00:00.050) 0:00:39.238 ********* ok: [managed_node1] => { "__network_connections_result.stderr_lines": [ "" ] } TASK [fedora.linux_system_roles.network : Show debug messages for the network_connections] *** Saturday 06 July 2024 06:46:06 -0400 (0:00:00.051) 0:00:39.289 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "name": "bond0.1", "persistent_state": "absent", "state": "down" }, { "name": "bond0.0", "persistent_state": "absent", "state": "down" }, { "name": "bond0", "persistent_state": "absent", "state": "down" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true, "failed": false, "stderr": "\n", "stderr_lines": [ "" ] } } TASK [fedora.linux_system_roles.network : Show debug messages for the network_state] *** Saturday 06 July 2024 06:46:06 -0400 (0:00:00.091) 0:00:39.381 ********* skipping: [managed_node1] => { "false_condition": "network_state is defined" } TASK [fedora.linux_system_roles.network : Re-test connectivity] **************** Saturday 06 July 2024 06:46:06 -0400 (0:00:00.051) 0:00:39.433 ********* ok: [managed_node1] => { "changed": false, "ping": "pong" } TASK [Delete the device 'nm-bond'] ********************************************* Saturday 06 July 2024 06:46:06 -0400 (0:00:00.432) 0:00:39.865 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ip", "link", "del", "nm-bond" ], "delta": "0:00:00.007799", "end": "2024-07-06 06:46:07.027295", "failed_when_result": false, "rc": 1, "start": "2024-07-06 06:46:07.019496" } STDERR: Cannot find device "nm-bond" MSG: non-zero return code TASK [Remove test interfaces] ************************************************** Saturday 06 July 2024 06:46:07 -0400 (0:00:00.418) 0:00:40.283 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euxo pipefail\nexec 1>&2\nrc=0\nip link delete test1 || rc=\"$?\"\nif [ \"$rc\" != 0 ]; then\n echo ERROR - could not delete link test1 - error \"$rc\"\nfi\nip link delete test2 || rc=\"$?\"\nif [ \"$rc\" != 0 
]; then\n echo ERROR - could not delete link test2 - error \"$rc\"\nfi\nip link delete testbr || rc=\"$?\"\nif [ \"$rc\" != 0 ]; then\n echo ERROR - could not delete link testbr - error \"$rc\"\nfi\n", "delta": "0:00:00.041414", "end": "2024-07-06 06:46:07.480305", "rc": 0, "start": "2024-07-06 06:46:07.438891" } STDERR: + exec + rc=0 + ip link delete test1 + '[' 0 '!=' 0 ']' + ip link delete test2 + '[' 0 '!=' 0 ']' + ip link delete testbr + '[' 0 '!=' 0 ']' TASK [Stop dnsmasq/radvd services] ********************************************* Saturday 06 July 2024 06:46:07 -0400 (0:00:00.449) 0:00:40.733 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -uxo pipefail\nexec 1>&2\npkill -F /run/dhcp_testbr.pid\nrm -rf /run/dhcp_testbr.pid\nrm -rf /run/dhcp_testbr.lease\nif grep 'release 6' /etc/redhat-release; then\n # Stop radvd server\n service radvd stop\n iptables -D INPUT -i testbr -p udp --dport 67:68 --sport 67:68 -j ACCEPT\nfi\n", "delta": "0:00:00.016133", "end": "2024-07-06 06:46:07.902574", "rc": 0, "start": "2024-07-06 06:46:07.886441" } STDERR: + exec + pkill -F /run/dhcp_testbr.pid + rm -rf /run/dhcp_testbr.pid + rm -rf /run/dhcp_testbr.lease + grep 'release 6' /etc/redhat-release TASK [Reset bond options to assert] ******************************************** Saturday 06 July 2024 06:46:07 -0400 (0:00:00.422) 0:00:41.155 ********* ok: [managed_node1] => { "ansible_facts": { "bond_options_to_assert": [ { "key": "mode", "value": "active-backup" }, { "key": "arp_interval", "value": "60" }, { "key": "arp_ip_target", "value": "192.0.2.128" }, { "key": "arp_validate", "value": "none" }, { "key": "primary", "value": "test1" } ] }, "changed": false } TASK [Include the task 'run_test.yml'] ***************************************** Saturday 06 July 2024 06:46:08 -0400 (0:00:00.049) 0:00:41.205 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/run_test.yml for managed_node1 TASK [TEST: Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device.] *** Saturday 06 July 2024 06:46:08 -0400 (0:00:00.082) 0:00:41.287 ********* ok: [managed_node1] => {} MSG: ########## Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device. ########## TASK [Show item] *************************************************************** Saturday 06 July 2024 06:46:08 -0400 (0:00:00.051) 0:00:41.339 ********* ok: [managed_node1] => (item=lsr_description) => { "ansible_loop_var": "item", "item": "lsr_description", "lsr_description": "Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device." 
} ok: [managed_node1] => (item=lsr_setup) => { "ansible_loop_var": "item", "item": "lsr_setup", "lsr_setup": [ "tasks/create_test_interfaces_with_dhcp.yml", "tasks/assert_dhcp_device_present.yml" ] } ok: [managed_node1] => (item=lsr_test) => { "ansible_loop_var": "item", "item": "lsr_test", "lsr_test": [ "tasks/create_bond_profile_reconfigure.yml" ] } ok: [managed_node1] => (item=lsr_assert) => { "ansible_loop_var": "item", "item": "lsr_assert", "lsr_assert": [ "tasks/assert_bond_options.yml" ] } ok: [managed_node1] => (item=lsr_assert_when) => { "ansible_loop_var": "item", "item": "lsr_assert_when", "lsr_assert_when": "VARIABLE IS NOT DEFINED!: 'lsr_assert_when' is undefined" } ok: [managed_node1] => (item=lsr_fail_debug) => { "ansible_loop_var": "item", "item": "lsr_fail_debug", "lsr_fail_debug": [ "__network_connections_result" ] } ok: [managed_node1] => (item=lsr_cleanup) => { "ansible_loop_var": "item", "item": "lsr_cleanup", "lsr_cleanup": [ "tasks/cleanup_bond_profile+device.yml", "tasks/remove_test_interfaces_with_dhcp.yml", "tasks/check_network_dns.yml" ] } TASK [Include the task 'show_interfaces.yml'] ********************************** Saturday 06 July 2024 06:46:08 -0400 (0:00:00.088) 0:00:41.428 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/show_interfaces.yml for managed_node1 TASK [Include the task 'get_current_interfaces.yml'] *************************** Saturday 06 July 2024 06:46:08 -0400 (0:00:00.080) 0:00:41.508 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_current_interfaces.yml for managed_node1 TASK [Gather current interface info] ******************************************* Saturday 06 July 2024 06:46:08 -0400 (0:00:00.119) 0:00:41.628 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ls", "-1" ], "delta": "0:00:00.003575", "end": "2024-07-06 06:46:08.792456", "rc": 0, "start": "2024-07-06 06:46:08.788881" } STDOUT: bonding_masters eth0 lo rpltstbr team0 TASK [Set current_interfaces] ************************************************** Saturday 06 July 2024 06:46:08 -0400 (0:00:00.419) 0:00:42.048 ********* ok: [managed_node1] => { "ansible_facts": { "current_interfaces": [ "bonding_masters", "eth0", "lo", "rpltstbr", "team0" ] }, "changed": false } TASK [Show current_interfaces] ************************************************* Saturday 06 July 2024 06:46:08 -0400 (0:00:00.049) 0:00:42.098 ********* ok: [managed_node1] => {} MSG: current_interfaces: ['bonding_masters', 'eth0', 'lo', 'rpltstbr', 'team0'] TASK [Setup] ******************************************************************* Saturday 06 July 2024 06:46:08 -0400 (0:00:00.050) 0:00:42.148 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/create_test_interfaces_with_dhcp.yml for managed_node1 => (item=tasks/create_test_interfaces_with_dhcp.yml) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_dhcp_device_present.yml for managed_node1 => (item=tasks/assert_dhcp_device_present.yml) TASK [Install dnsmasq] ********************************************************* Saturday 06 July 2024 06:46:09 -0400 (0:00:00.101) 0:00:42.249 ********* ok: 
[managed_node1] => { "attempts": 1, "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Install pgrep, sysctl] *************************************************** Saturday 06 July 2024 06:46:10 -0400 (0:00:01.780) 0:00:44.029 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution_major_version is version('6', '<=')", "skip_reason": "Conditional result was False" } TASK [Install pgrep, sysctl] *************************************************** Saturday 06 July 2024 06:46:10 -0400 (0:00:00.049) 0:00:44.079 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do TASK [Create test interfaces] ************************************************** Saturday 06 July 2024 06:46:12 -0400 (0:00:01.774) 0:00:45.854 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euxo pipefail\nexec 1>&2\nip link add test1 type veth peer name test1p\nip link add test2 type veth peer name test2p\nif [ -n \"$(pgrep NetworkManager)\" ];then\n nmcli d set test1 managed true\n nmcli d set test2 managed true\n # NetworkManager should not manage DHCP server ports\n nmcli d set test1p managed false\n nmcli d set test2p managed false\nfi\nip link set test1p up\nip link set test2p up\n\n# Create the 'testbr' - providing both 10.x ipv4 and 2620:52:0 ipv6 dhcp\nip link add name testbr type bridge forward_delay 0\nif [ -n \"$(pgrep NetworkManager)\" ];then\n # NetworkManager should not manage DHCP server ports\n nmcli d set testbr managed false\nfi\nip link set testbr up\ntimer=0\n# The while loop following is a workaround for the NM bug, which can be\n# tracked in https://bugzilla.redhat.com/show_bug.cgi?id=2079642\nwhile ! ip addr show testbr | grep -q 'inet [1-9]'\ndo\n let \"timer+=1\"\n if [ $timer -eq 30 ]; then\n echo ERROR - could not add testbr\n ip addr\n exit 1\n fi\n sleep 1\n rc=0\n ip addr add 192.0.2.1/24 dev testbr || rc=\"$?\"\n if [ \"$rc\" != 0 ]; then\n echo NOTICE - could not add testbr - error code \"$rc\"\n continue\n fi\n ip -6 addr add 2001:DB8::1/32 dev testbr || rc=\"$?\"\n if [ \"$rc\" != 0 ]; then\n echo NOTICE - could not add testbr - error code \"$rc\"\n continue\n fi\ndone\n\nif grep 'release 6' /etc/redhat-release; then\n # We need bridge-utils and radvd only in rhel6\n if ! rpm -q --quiet radvd; then yum -y install radvd; fi\n if ! 
rpm -q --quiet bridge-utils; then yum -y install bridge-utils; fi\n\n # We need to add iptables rule to allow dhcp request\n iptables -I INPUT -i testbr -p udp --dport 67:68 --sport 67:68 -j ACCEPT\n\n # Add test1, test2 peers into the testbr\n brctl addif testbr test1p\n brctl addif testbr test2p\n\n # in RHEL6 /run is not present\n mkdir -p /run\n\n # and dnsmasq does not support ipv6\n dnsmasq --pid-file=/run/dhcp_testbr.pid --dhcp-leasefile=/run/dhcp_testbr.lease --dhcp-range=192.0.2.1,192.0.2.254,240 --interface=testbr --bind-interfaces\n\n # start radvd for ipv6\n echo 'interface testbr {' > /etc/radvd.conf\n echo ' AdvSendAdvert on;' >> /etc/radvd.conf\n echo ' prefix 2001:DB8::/64 { ' >> /etc/radvd.conf\n echo ' AdvOnLink on; }; ' >> /etc/radvd.conf\n echo ' }; ' >> /etc/radvd.conf\n\n # enable ipv6 forwarding\n sysctl -w net.ipv6.conf.all.forwarding=1\n service radvd restart\n\nelse\n ip link set test1p master testbr\n ip link set test2p master testbr\n # Run joint DHCP4/DHCP6 server with RA enabled in veth namespace\n dnsmasq --pid-file=/run/dhcp_testbr.pid --dhcp-leasefile=/run/dhcp_testbr.lease --dhcp-range=192.0.2.1,192.0.2.254,240 --dhcp-range=2001:DB8::10,2001:DB8::1FF,slaac,64,240 --enable-ra --interface=testbr --bind-interfaces\nfi\n", "delta": "0:00:01.269228", "end": "2024-07-06 06:46:14.284769", "rc": 0, "start": "2024-07-06 06:46:13.015541" } STDERR: + exec + ip link add test1 type veth peer name test1p + ip link add test2 type veth peer name test2p ++ pgrep NetworkManager + '[' -n 59278 ']' + nmcli d set test1 managed true + nmcli d set test2 managed true + nmcli d set test1p managed false + nmcli d set test2p managed false + ip link set test1p up + ip link set test2p up + ip link add name testbr type bridge forward_delay 0 ++ pgrep NetworkManager + '[' -n 59278 ']' + nmcli d set testbr managed false + ip link set testbr up + timer=0 + ip addr show testbr + grep -q 'inet [1-9]' + let timer+=1 + '[' 1 -eq 30 ']' + sleep 1 + rc=0 + ip addr add 192.0.2.1/24 dev testbr + '[' 0 '!=' 0 ']' + ip -6 addr add 2001:DB8::1/32 dev testbr + '[' 0 '!=' 0 ']' + grep -q 'inet [1-9]' + ip addr show testbr + grep 'release 6' /etc/redhat-release + ip link set test1p master testbr + ip link set test2p master testbr + dnsmasq --pid-file=/run/dhcp_testbr.pid --dhcp-leasefile=/run/dhcp_testbr.lease --dhcp-range=192.0.2.1,192.0.2.254,240 --dhcp-range=2001:DB8::10,2001:DB8::1FF,slaac,64,240 --enable-ra --interface=testbr --bind-interfaces TASK [Include the task 'get_interface_stat.yml'] ******************************* Saturday 06 July 2024 06:46:14 -0400 (0:00:01.686) 0:00:47.540 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_interface_stat.yml for managed_node1 TASK [Get stat for interface test1] ******************************************** Saturday 06 July 2024 06:46:14 -0400 (0:00:00.080) 0:00:47.620 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1720262773.0208833, "block_size": 4096, "blocks": 0, "ctime": 1720262773.0208833, "dev": 23, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 46044, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/sys/devices/virtual/net/test1", "lnk_target": "../../devices/virtual/net/test1", "mode": "0777", "mtime": 1720262773.0208833, "nlink": 1, "path": 
"/sys/class/net/test1", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [Assert that the interface is present - 'test1'] ************************** Saturday 06 July 2024 06:46:14 -0400 (0:00:00.408) 0:00:48.029 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Include the task 'get_interface_stat.yml'] ******************************* Saturday 06 July 2024 06:46:14 -0400 (0:00:00.053) 0:00:48.083 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/get_interface_stat.yml for managed_node1 TASK [Get stat for interface test2] ******************************************** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.123) 0:00:48.207 ********* ok: [managed_node1] => { "changed": false, "stat": { "atime": 1720262773.028284, "block_size": 4096, "blocks": 0, "ctime": 1720262773.028284, "dev": 23, "device_type": 0, "executable": true, "exists": true, "gid": 0, "gr_name": "root", "inode": 46450, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": true, "isreg": false, "issock": false, "isuid": false, "lnk_source": "/sys/devices/virtual/net/test2", "lnk_target": "../../devices/virtual/net/test2", "mode": "0777", "mtime": 1720262773.028284, "nlink": 1, "path": "/sys/class/net/test2", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 0, "uid": 0, "wgrp": true, "woth": true, "writeable": true, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } } TASK [Assert that the interface is present - 'test2'] ************************** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.416) 0:00:48.623 ********* ok: [managed_node1] => { "changed": false } MSG: All assertions passed TASK [Test] ******************************************************************** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.054) 0:00:48.678 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/create_bond_profile_reconfigure.yml for managed_node1 => (item=tasks/create_bond_profile_reconfigure.yml) TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role] *** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.117) 0:00:48.795 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/roles/network/tasks/set_facts.yml for managed_node1 TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role are present] *** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.093) 0:00:48.889 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check if system is ostree] *********** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.064) 0:00:48.954 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Set flag to indicate system is ostree] *** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.051) 0:00:49.006 ********* skipping: [managed_node1] => { "changed": 
false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check which services are running] **** Saturday 06 July 2024 06:46:15 -0400 (0:00:00.050) 0:00:49.056 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check which packages are installed] *** Saturday 06 July 2024 06:46:18 -0400 (0:00:02.279) 0:00:51.335 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Print network provider] ************** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.994) 0:00:52.330 ********* ok: [managed_node1] => {} MSG: Using network provider: nm TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if using the `network_state` variable with the initscripts provider] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.051) 0:00:52.382 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if the system version of the managed host is below 8] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.095) 0:00:52.477 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the DNF package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.051) 0:00:52.529 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the YUM package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.073) 0:00:52.602 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution_major_version | int < 8", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Ask user's consent to restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.053) 0:00:52.656 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install packages] ******************** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.075) 0:00:52.731 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not network_packages is subset(ansible_facts.packages.keys())", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install NetworkManager and nmstate when using network_state variable] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.126) 0:00:52.857 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": 
"Conditional result was False" } TASK [fedora.linux_system_roles.network : Install python3-libnmstate when using network_state variable] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.052) 0:00:52.910 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.048) 0:00:52.958 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable and start NetworkManager] ***** Saturday 06 July 2024 06:46:19 -0400 (0:00:00.073) 0:00:53.031 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Enable and start wpa_supplicant] ***** Saturday 06 July 2024 06:46:20 -0400 (0:00:00.663) 0:00:53.695 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wpa_supplicant_required", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable network service] ************** Saturday 06 July 2024 06:46:20 -0400 (0:00:00.071) 0:00:53.767 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Ensure initscripts network file dependency is present] *** Saturday 06 July 2024 06:46:20 -0400 (0:00:00.047) 0:00:53.814 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_provider == \"initscripts\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Configure networking connection profiles] *** Saturday 06 July 2024 06:46:20 -0400 (0:00:00.044) 0:00:53.859 ********* changed: [managed_node1] => { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "bond": { "arp_interval": 60, "arp_ip_target": "192.0.2.128", "arp_validate": "none", "mode": "active-backup", "primary": "test1" }, "interface_name": "nm-bond", "ip": { "route_metric4": 65535 }, "name": "bond0", "state": "up", "type": "bond" }, { "controller": "bond0", "interface_name": "test1", "name": "bond0.0", "state": "up", "type": "ethernet" }, { "controller": "bond0", "interface_name": "test2", "name": "bond0.1", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true } STDERR: [007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf [008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590 [009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87 [010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf (is-modified) [011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590 (not-active) [012] #2, state:up persistent_state:present, 'bond0.1': up connection 
bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87 (not-active) TASK [fedora.linux_system_roles.network : Configure networking state] ********** Saturday 06 July 2024 06:46:21 -0400 (0:00:00.861) 0:00:54.720 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Show stderr messages for the network_connections] *** Saturday 06 July 2024 06:46:21 -0400 (0:00:00.052) 0:00:54.772 ********* ok: [managed_node1] => { "__network_connections_result.stderr_lines": [ "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf", "[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590", "[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87", "[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf (is-modified)", "[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590 (not-active)", "[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87 (not-active)" ] } TASK [fedora.linux_system_roles.network : Show debug messages for the network_connections] *** Saturday 06 July 2024 06:46:21 -0400 (0:00:00.091) 0:00:54.864 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "bond": { "arp_interval": 60, "arp_ip_target": "192.0.2.128", "arp_validate": "none", "mode": "active-backup", "primary": "test1" }, "interface_name": "nm-bond", "ip": { "route_metric4": 65535 }, "name": "bond0", "state": "up", "type": "bond" }, { "controller": "bond0", "interface_name": "test1", "name": "bond0.0", "state": "up", "type": "ethernet" }, { "controller": "bond0", "interface_name": "test2", "name": "bond0.1", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true, "failed": false, "stderr": "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf\n[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590\n[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87\n[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf (is-modified)\n[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590 (not-active)\n[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87 (not-active)\n", "stderr_lines": [ "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf", "[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590", "[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87", "[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf (is-modified)", "[011] #1, 
state:up persistent_state:present, 'bond0.0': up connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590 (not-active)", "[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87 (not-active)" ] } } TASK [fedora.linux_system_roles.network : Show debug messages for the network_state] *** Saturday 06 July 2024 06:46:21 -0400 (0:00:00.055) 0:00:54.919 ********* skipping: [managed_node1] => { "false_condition": "network_state is defined" } TASK [fedora.linux_system_roles.network : Re-test connectivity] **************** Saturday 06 July 2024 06:46:21 -0400 (0:00:00.052) 0:00:54.971 ********* ok: [managed_node1] => { "changed": false, "ping": "pong" } TASK [Show result] ************************************************************* Saturday 06 July 2024 06:46:22 -0400 (0:00:00.441) 0:00:55.413 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "bond": { "arp_interval": 60, "arp_ip_target": "192.0.2.128", "arp_validate": "none", "mode": "active-backup", "primary": "test1" }, "interface_name": "nm-bond", "ip": { "route_metric4": 65535 }, "name": "bond0", "state": "up", "type": "bond" }, { "controller": "bond0", "interface_name": "test1", "name": "bond0.0", "state": "up", "type": "ethernet" }, { "controller": "bond0", "interface_name": "test2", "name": "bond0.1", "state": "up", "type": "ethernet" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true, "failed": false, "stderr": "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf\n[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590\n[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87\n[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf (is-modified)\n[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590 (not-active)\n[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87 (not-active)\n", "stderr_lines": [ "[007] #0, state:up persistent_state:present, 'bond0': add connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf", "[008] #1, state:up persistent_state:present, 'bond0.0': add connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590", "[009] #2, state:up persistent_state:present, 'bond0.1': add connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87", "[010] #0, state:up persistent_state:present, 'bond0': up connection bond0, 581c5862-092f-49ee-b41f-69aab8d7f4cf (is-modified)", "[011] #1, state:up persistent_state:present, 'bond0.0': up connection bond0.0, 8b26b8c6-880b-44fe-b856-81f88f7f3590 (not-active)", "[012] #2, state:up persistent_state:present, 'bond0.1': up connection bond0.1, 888cbb45-7484-4abb-8851-519a27f56c87 (not-active)" ] } } TASK [Asserts] ***************************************************************** Saturday 06 July 2024 06:46:22 -0400 (0:00:00.050) 0:00:55.464 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_bond_options.yml for managed_node1 => (item=tasks/assert_bond_options.yml) 
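The module_args shown above are what the fedora.linux_system_roles.network role hands to its nm provider when (re)building the bond. Expressed as playbook variables, the same configuration corresponds roughly to the network_connections sketch below; it is reconstructed from the logged arguments, not copied from the literal test playbook, and the play header is a placeholder.

    # Sketch only: bond controller + two port profiles, values taken from the
    # module_args logged above (interface names, bond options, route_metric4).
    - hosts: managed_node1
      roles:
        - role: fedora.linux_system_roles.network
          vars:
            network_connections:
              # Controller profile: bond device "nm-bond" with the options
              # asserted later in this run.
              - name: bond0
                type: bond
                interface_name: nm-bond
                state: up
                ip:
                  route_metric4: 65535
                bond:
                  mode: active-backup
                  arp_interval: 60
                  arp_ip_target: 192.0.2.128
                  arp_validate: none
                  primary: test1
              # Port profiles attached to the controller.
              - name: bond0.0
                type: ethernet
                interface_name: test1
                controller: bond0
                state: up
              - name: bond0.1
                type: ethernet
                interface_name: test2
                controller: bond0
                state: up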
TASK [** TEST check bond settings] ********************************************* Saturday 06 July 2024 06:46:22 -0400 (0:00:00.106) 0:00:55.571 ********* ok: [managed_node1] => (item={'key': 'mode', 'value': 'active-backup'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/mode" ], "delta": "0:00:00.003060", "end": "2024-07-06 06:46:22.734814", "item": { "key": "mode", "value": "active-backup" }, "rc": 0, "start": "2024-07-06 06:46:22.731754" } STDOUT: active-backup 1 ok: [managed_node1] => (item={'key': 'arp_interval', 'value': '60'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/arp_interval" ], "delta": "0:00:00.003024", "end": "2024-07-06 06:46:23.104645", "item": { "key": "arp_interval", "value": "60" }, "rc": 0, "start": "2024-07-06 06:46:23.101621" } STDOUT: 60 ok: [managed_node1] => (item={'key': 'arp_ip_target', 'value': '192.0.2.128'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/arp_ip_target" ], "delta": "0:00:00.002957", "end": "2024-07-06 06:46:23.479695", "item": { "key": "arp_ip_target", "value": "192.0.2.128" }, "rc": 0, "start": "2024-07-06 06:46:23.476738" } STDOUT: 192.0.2.128 ok: [managed_node1] => (item={'key': 'arp_validate', 'value': 'none'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/arp_validate" ], "delta": "0:00:00.002986", "end": "2024-07-06 06:46:23.853064", "item": { "key": "arp_validate", "value": "none" }, "rc": 0, "start": "2024-07-06 06:46:23.850078" } STDOUT: none 0 ok: [managed_node1] => (item={'key': 'primary', 'value': 'test1'}) => { "ansible_loop_var": "item", "attempts": 1, "changed": false, "cmd": [ "cat", "/sys/class/net/nm-bond/bonding/primary" ], "delta": "0:00:00.003261", "end": "2024-07-06 06:46:24.225040", "item": { "key": "primary", "value": "test1" }, "rc": 0, "start": "2024-07-06 06:46:24.221779" } STDOUT: test1 TASK [Include the task 'assert_IPv4_present.yml'] ****************************** Saturday 06 July 2024 06:46:24 -0400 (0:00:01.914) 0:00:57.486 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_IPv4_present.yml for managed_node1 TASK [** TEST check IPv4] ****************************************************** Saturday 06 July 2024 06:46:24 -0400 (0:00:00.078) 0:00:57.564 ********* FAILED - RETRYING: [managed_node1]: ** TEST check IPv4 (20 retries left). 
ok: [managed_node1] => { "attempts": 2, "changed": false, "cmd": [ "ip", "-4", "a", "s", "nm-bond" ], "delta": "0:00:00.003528", "end": "2024-07-06 06:46:27.118782", "rc": 0, "start": "2024-07-06 06:46:27.115254" } STDOUT: 73: nm-bond: mtu 1500 qdisc noqueue state UP group default qlen 1000 inet 192.0.2.91/24 brd 192.0.2.255 scope global dynamic noprefixroute nm-bond valid_lft 237sec preferred_lft 237sec TASK [Include the task 'assert_IPv6_present.yml'] ****************************** Saturday 06 July 2024 06:46:27 -0400 (0:00:02.812) 0:01:00.377 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/assert_IPv6_present.yml for managed_node1 TASK [** TEST check IPv6] ****************************************************** Saturday 06 July 2024 06:46:27 -0400 (0:00:00.079) 0:01:00.456 ********* ok: [managed_node1] => { "attempts": 1, "changed": false, "cmd": [ "ip", "-6", "a", "s", "nm-bond" ], "delta": "0:00:00.004632", "end": "2024-07-06 06:46:27.625277", "rc": 0, "start": "2024-07-06 06:46:27.620645" } STDOUT: 73: nm-bond: mtu 1500 qdisc noqueue state UP group default qlen 1000 inet6 2001:db8::1e2/128 scope global dynamic noprefixroute valid_lft 238sec preferred_lft 238sec inet6 2001:db8::3857:799d:ca94:20f7/64 scope global dynamic noprefixroute valid_lft 1797sec preferred_lft 1797sec inet6 fe80::c896:8415:b77a:67a4/64 scope link noprefixroute valid_lft forever preferred_lft forever TASK [Conditional asserts] ***************************************************** Saturday 06 July 2024 06:46:27 -0400 (0:00:00.429) 0:01:00.885 ********* skipping: [managed_node1] => { "changed": false, "skipped_reason": "No items in the list" } TASK [Success in test 'Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device.'] *** Saturday 06 July 2024 06:46:27 -0400 (0:00:00.093) 0:01:00.979 ********* ok: [managed_node1] => {} MSG: +++++ Success in test 'Given two DHCP-enabled network interfaces, when creating a bond profile with them, then the controller device and bond port profiles are present and the specified bond options are set for the controller device.' 
+++++ TASK [Cleanup] ***************************************************************** Saturday 06 July 2024 06:46:27 -0400 (0:00:00.049) 0:01:01.028 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/cleanup_bond_profile+device.yml for managed_node1 => (item=tasks/cleanup_bond_profile+device.yml) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/remove_test_interfaces_with_dhcp.yml for managed_node1 => (item=tasks/remove_test_interfaces_with_dhcp.yml) included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/tests/network/playbooks/tasks/check_network_dns.yml for managed_node1 => (item=tasks/check_network_dns.yml) TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role] *** Saturday 06 July 2024 06:46:28 -0400 (0:00:00.143) 0:01:01.172 ********* included: /var/ARTIFACTS/work-general_smz2p_w/plans/general/tree/tmp.YdqdIfIgog/ansible_collections/fedora/linux_system_roles/roles/network/tasks/set_facts.yml for managed_node1 TASK [fedora.linux_system_roles.network : Ensure ansible_facts used by role are present] *** Saturday 06 July 2024 06:46:28 -0400 (0:00:00.095) 0:01:01.268 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check if system is ostree] *********** Saturday 06 July 2024 06:46:28 -0400 (0:00:00.062) 0:01:01.330 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Set flag to indicate system is ostree] *** Saturday 06 July 2024 06:46:28 -0400 (0:00:00.052) 0:01:01.383 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not __network_is_ostree is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check which services are running] **** Saturday 06 July 2024 06:46:28 -0400 (0:00:00.052) 0:01:01.436 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Check which packages are installed] *** Saturday 06 July 2024 06:46:30 -0400 (0:00:02.283) 0:01:03.719 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Print network provider] ************** Saturday 06 July 2024 06:46:31 -0400 (0:00:01.022) 0:01:04.742 ********* ok: [managed_node1] => {} MSG: Using network provider: nm TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if using the `network_state` variable with the initscripts provider] *** Saturday 06 July 2024 06:46:31 -0400 (0:00:00.051) 0:01:04.793 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Abort applying the network state configuration if the system version of the managed host is below 8] *** Saturday 06 
July 2024 06:46:31 -0400 (0:00:00.048) 0:01:04.842 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the DNF package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:31 -0400 (0:00:00.095) 0:01:04.938 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Check if updates for network packages are available through the YUM package manager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:31 -0400 (0:00:00.070) 0:01:05.008 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_distribution_major_version | int < 8", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Ask user's consent to restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:31 -0400 (0:00:00.054) 0:01:05.062 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install packages] ******************** Saturday 06 July 2024 06:46:31 -0400 (0:00:00.069) 0:01:05.132 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "not network_packages is subset(ansible_facts.packages.keys())", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install NetworkManager and nmstate when using network_state variable] *** Saturday 06 July 2024 06:46:32 -0400 (0:00:00.110) 0:01:05.242 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Install python3-libnmstate when using network_state variable] *** Saturday 06 July 2024 06:46:32 -0400 (0:00:00.052) 0:01:05.295 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Restart NetworkManager due to wireless or team interfaces] *** Saturday 06 July 2024 06:46:32 -0400 (0:00:00.052) 0:01:05.347 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wireless_connections_defined or __network_team_connections_defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable and start NetworkManager] ***** Saturday 06 July 2024 06:46:32 -0400 (0:00:00.065) 0:01:05.413 ********* ok: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Enable and start wpa_supplicant] ***** Saturday 06 July 2024 06:46:32 -0400 (0:00:00.655) 0:01:06.068 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "__network_wpa_supplicant_required", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Enable network service] ************** Saturday 06 July 2024 06:46:32 -0400 (0:00:00.066) 
0:01:06.135 ********* skipping: [managed_node1] => { "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false } TASK [fedora.linux_system_roles.network : Ensure initscripts network file dependency is present] *** Saturday 06 July 2024 06:46:33 -0400 (0:00:00.048) 0:01:06.183 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_provider == \"initscripts\"", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Configure networking connection profiles] *** Saturday 06 July 2024 06:46:33 -0400 (0:00:00.046) 0:01:06.229 ********* changed: [managed_node1] => { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "name": "bond0.1", "persistent_state": "absent", "state": "down" }, { "name": "bond0.0", "persistent_state": "absent", "state": "down" }, { "name": "bond0", "persistent_state": "absent", "state": "down" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true } STDERR: TASK [fedora.linux_system_roles.network : Configure networking state] ********** Saturday 06 July 2024 06:46:34 -0400 (0:00:01.030) 0:01:07.260 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "network_state is defined", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.network : Show stderr messages for the network_connections] *** Saturday 06 July 2024 06:46:34 -0400 (0:00:00.050) 0:01:07.310 ********* ok: [managed_node1] => { "__network_connections_result.stderr_lines": [ "" ] } TASK [fedora.linux_system_roles.network : Show debug messages for the network_connections] *** Saturday 06 July 2024 06:46:34 -0400 (0:00:00.050) 0:01:07.360 ********* ok: [managed_node1] => { "__network_connections_result": { "_invocation": { "module_args": { "__debug_flags": "", "__header": "#\n# Ansible managed\n#\n# system_role:network\n", "connections": [ { "name": "bond0.1", "persistent_state": "absent", "state": "down" }, { "name": "bond0.0", "persistent_state": "absent", "state": "down" }, { "name": "bond0", "persistent_state": "absent", "state": "down" } ], "force_state_change": false, "ignore_errors": false, "provider": "nm" } }, "changed": true, "failed": false, "stderr": "\n", "stderr_lines": [ "" ] } } TASK [fedora.linux_system_roles.network : Show debug messages for the network_state] *** Saturday 06 July 2024 06:46:34 -0400 (0:00:00.099) 0:01:07.459 ********* skipping: [managed_node1] => { "false_condition": "network_state is defined" } TASK [fedora.linux_system_roles.network : Re-test connectivity] **************** Saturday 06 July 2024 06:46:34 -0400 (0:00:00.052) 0:01:07.512 ********* ok: [managed_node1] => { "changed": false, "ping": "pong" } TASK [Delete the device 'nm-bond'] ********************************************* Saturday 06 July 2024 06:46:34 -0400 (0:00:00.431) 0:01:07.944 ********* ok: [managed_node1] => { "changed": false, "cmd": [ "ip", "link", "del", "nm-bond" ], "delta": "0:00:00.007978", "end": "2024-07-06 06:46:35.105600", "failed_when_result": false, "rc": 1, "start": "2024-07-06 06:46:35.097622" } STDERR: Cannot find device "nm-bond" MSG: non-zero return code TASK [Remove test interfaces] ************************************************** Saturday 06 July 2024 06:46:35 -0400 (0:00:00.412) 0:01:08.356 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euxo pipefail\nexec 
1>&2\nrc=0\nip link delete test1 || rc=\"$?\"\nif [ \"$rc\" != 0 ]; then\n echo ERROR - could not delete link test1 - error \"$rc\"\nfi\nip link delete test2 || rc=\"$?\"\nif [ \"$rc\" != 0 ]; then\n echo ERROR - could not delete link test2 - error \"$rc\"\nfi\nip link delete testbr || rc=\"$?\"\nif [ \"$rc\" != 0 ]; then\n echo ERROR - could not delete link testbr - error \"$rc\"\nfi\n", "delta": "0:00:00.037765", "end": "2024-07-06 06:46:35.552301", "rc": 0, "start": "2024-07-06 06:46:35.514536" } STDERR: + exec + rc=0 + ip link delete test1 + '[' 0 '!=' 0 ']' + ip link delete test2 + '[' 0 '!=' 0 ']' + ip link delete testbr + '[' 0 '!=' 0 ']' TASK [Stop dnsmasq/radvd services] ********************************************* Saturday 06 July 2024 06:46:35 -0400 (0:00:00.449) 0:01:08.806 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -uxo pipefail\nexec 1>&2\npkill -F /run/dhcp_testbr.pid\nrm -rf /run/dhcp_testbr.pid\nrm -rf /run/dhcp_testbr.lease\nif grep 'release 6' /etc/redhat-release; then\n # Stop radvd server\n service radvd stop\n iptables -D INPUT -i testbr -p udp --dport 67:68 --sport 67:68 -j ACCEPT\nfi\n", "delta": "0:00:00.019058", "end": "2024-07-06 06:46:35.976955", "rc": 0, "start": "2024-07-06 06:46:35.957897" } STDERR: + exec + pkill -F /run/dhcp_testbr.pid + rm -rf /run/dhcp_testbr.pid + rm -rf /run/dhcp_testbr.lease + grep 'release 6' /etc/redhat-release TASK [Check routes and DNS] **************************************************** Saturday 06 July 2024 06:46:36 -0400 (0:00:00.428) 0:01:09.234 ********* ok: [managed_node1] => { "changed": false, "cmd": "set -euo pipefail\necho IP\nip a\necho IP ROUTE\nip route\necho IP -6 ROUTE\nip -6 route\necho RESOLV\nif [ -f /etc/resolv.conf ]; then\n cat /etc/resolv.conf\nelse\n echo NO /etc/resolv.conf\n ls -alrtF /etc/resolv.* || :\nfi\n", "delta": "0:00:00.008710", "end": "2024-07-06 06:46:36.395476", "rc": 0, "start": "2024-07-06 06:46:36.386766" } STDOUT: IP 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host noprefixroute valid_lft forever preferred_lft forever 2: eth0: mtu 9001 qdisc mq state UP group default qlen 1000 link/ether 02:af:ee:21:a1:c9 brd ff:ff:ff:ff:ff:ff altname enX0 inet 10.31.45.166/22 brd 10.31.47.255 scope global dynamic noprefixroute eth0 valid_lft 3038sec preferred_lft 3038sec inet6 fe80::5c0d:af1:1b65:df2b/64 scope link noprefixroute valid_lft forever preferred_lft forever 12: team0: mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 16:76:1d:3d:33:47 brd ff:ff:ff:ff:ff:ff 48: rpltstbr: mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether fe:6b:59:63:7b:85 brd ff:ff:ff:ff:ff:ff inet 192.0.2.72/31 scope global noprefixroute rpltstbr valid_lft forever preferred_lft forever IP ROUTE default via 10.31.44.1 dev eth0 proto dhcp src 10.31.45.166 metric 100 10.31.44.0/22 dev eth0 proto kernel scope link src 10.31.45.166 metric 100 192.0.2.72/31 dev rpltstbr proto kernel scope link src 192.0.2.72 metric 425 linkdown IP -6 ROUTE fe80::/64 dev eth0 proto kernel metric 1024 pref medium RESOLV # This is /run/systemd/resolve/stub-resolv.conf managed by man:systemd-resolved(8). # Do not edit. # # This file might be symlinked as /etc/resolv.conf. If you're looking at # /etc/resolv.conf and seeing this text, you have followed the symlink. 
# # This is a dynamic resolv.conf file for connecting local clients to the # internal DNS stub resolver of systemd-resolved. This file lists all # configured search domains. # # Run "resolvectl status" to see details about the uplink DNS servers # currently in use. # # Third party programs should typically not access this file directly, but only # through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a # different way, replace this symlink by a static file or a different symlink. # # See man:systemd-resolved.service(8) for details about the supported modes of # operation for /etc/resolv.conf. nameserver 127.0.0.53 options edns0 trust-ad search us-east-1.aws.redhat.com TASK [Verify DNS and network connectivity] ************************************* Saturday 06 July 2024 06:46:36 -0400 (0:00:00.415) 0:01:09.649 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "ansible_facts[\"distribution\"] == \"CentOS\"", "skip_reason": "Conditional result was False" } PLAY RECAP ********************************************************************* managed_node1 : ok=147 changed=4 unreachable=0 failed=0 skipped=94 rescued=0 ignored=0 Saturday 06 July 2024 06:46:36 -0400 (0:00:00.144) 0:01:09.794 ********* =============================================================================== ** TEST check bond settings --------------------------------------------- 8.59s ** TEST check IPv4 ------------------------------------------------------ 2.81s fedora.linux_system_roles.network : Check which services are running ---- 2.37s fedora.linux_system_roles.network : Check which services are running ---- 2.36s fedora.linux_system_roles.network : Check which services are running ---- 2.28s fedora.linux_system_roles.network : Check which services are running ---- 2.28s ** TEST check bond settings --------------------------------------------- 1.91s Install dnsmasq --------------------------------------------------------- 1.90s Install dnsmasq --------------------------------------------------------- 1.78s Install pgrep, sysctl --------------------------------------------------- 1.77s Install pgrep, sysctl --------------------------------------------------- 1.74s Create test interfaces -------------------------------------------------- 1.69s Create test interfaces -------------------------------------------------- 1.66s Gathering Facts --------------------------------------------------------- 1.29s fedora.linux_system_roles.network : Check which packages are installed --- 1.27s fedora.linux_system_roles.network : Configure networking connection profiles --- 1.03s fedora.linux_system_roles.network : Configure networking connection profiles --- 1.03s fedora.linux_system_roles.network : Check which packages are installed --- 1.02s fedora.linux_system_roles.network : Check which packages are installed --- 1.02s fedora.linux_system_roles.network : Check which packages are installed --- 0.99s
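For reference, the '** TEST check bond settings' assertions earlier in this run reduce to reading each option back from the kernel's bonding sysfs files and checking that the expected value appears in the output. A minimal sketch of such a check, assuming the bond_options_to_assert list set in the 'Reset bond options to assert' task; the register name and the retry/delay values are illustrative, not taken from the test sources.

    # Sketch only: read each bond option from sysfs and retry until the
    # expected value is present in the output.
    - name: "** TEST check bond settings"
      command: cat /sys/class/net/nm-bond/bonding/{{ item.key }}
      register: bond_opt
      # Some files also print a numeric index (e.g. "active-backup 1",
      # "encap2+3 3"), so a substring check is used rather than equality.
      until: item.value in bond_opt.stdout
      retries: 20
      delay: 2
      changed_when: false
      loop: "{{ bond_options_to_assert }}"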