
Conversation


@rammanoj rammanoj commented Nov 5, 2025

📝 Description

What does this PR do and why is this change necessary?

  • Add firewall_id support to the LKE Node Pool (lke_node_pool) module

✔️ How to Test

What are the steps to reproduce the issue or verify the changes?

  • Run make install, which installs the Ansible collection.
  • Create an LKE cluster through the UI.
  • Create 2 firewalls (firewall1 and firewall2). If you prefer not to create them by hand, see the firewall-module sketch after these steps.
  • Set the LINODE_API_TOKEN environment variable.
  • Use the configuration below to verify that a node pool is created with firewall1's ID (run ansible-playbook playbook.yaml):
- name: Manage nodePool
  hosts: localhost
  vars:
    ansible_python_interpreter: <path-to-python-env-where-collection-is-installed>
  collections:
    - linode.cloud
  tasks:
    - name: Create a Linode LKE node pool with autoscaler
      linode.cloud.lke_node_pool:
        api_version: v4beta
        tags: ['hello-world']
        cluster_id: <cluster-id>
        count: 4
        type: g6-standard-2
        firewall_id: <firewall-1-id>
        state: present
      register: node_pool

    - name: Output node pool details
      debug:
        var: node_pool

  • Change firewall_id to the ID of firewall2 and re-run the playbook. Ensure that the firewall ID is updated in the UI and that the job completed successfully.
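
If you prefer not to create firewall1 and firewall2 by hand, a rough equivalent using the collection's firewall module is sketched below. This is a minimal sketch only; the rules values and the firewall.id field on the registered results are assumptions based on the module's documented behavior, not part of this PR.

- name: Create test firewalls
  hosts: localhost
  vars:
    ansible_python_interpreter: <path-to-python-env-where-collection-is-installed>
  collections:
    - linode.cloud
  tasks:
    # Assumption: linode.cloud.firewall accepts label, rules, and state,
    # and returns the created firewall under the "firewall" key.
    - name: Create firewall1
      linode.cloud.firewall:
        label: firewall1
        rules:
          inbound_policy: DROP
          outbound_policy: ACCEPT
        state: present
      register: firewall1

    - name: Create firewall2
      linode.cloud.firewall:
        label: firewall2
        rules:
          inbound_policy: DROP
          outbound_policy: ACCEPT
        state: present
      register: firewall2

    - name: Print the IDs to use for <firewall-1-id> and <firewall-2-id>
      debug:
        msg: "{{ firewall1.firewall.id }} / {{ firewall2.firewall.id }}"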

@rammanoj rammanoj requested a review from a team as a code owner November 5, 2025 16:08
@rammanoj rammanoj requested review from lgarber-akamai and yec-akamai and removed request for a team November 5, 2025 16:08
Comment on lines 47 to 112
    firewall_id: 123456
    label: new-pool-label
    labels:
      foo.example.com/test: bar
      foo.example.com/test2: foo
    taints:
      - key: foo.example.com/test2
        value: test
        effect: NoExecute
    state: present
  register: new_pool

- name: Assert node pool is added to cluster
  assert:
    that:
      - new_pool.node_pool.count == 2
      - new_pool.node_pool.firewall_id == 123456
      - new_pool.node_pool.label == 'new-pool-label'
      - new_pool.node_pool.type == 'g6-standard-1'
      - new_pool.node_pool.nodes[0].status == 'ready'
      - new_pool.node_pool.nodes[1].status == 'ready'
      - new_pool.node_pool.labels['foo.example.com/test'] == 'bar'
      - new_pool.node_pool.labels['foo.example.com/test2'] == 'foo'
      - new_pool.node_pool.taints[0].key == 'foo.example.com/test2'
      - new_pool.node_pool.taints[0].value == 'test'
      - new_pool.node_pool.taints[0].effect == 'NoExecute'

- name: Attempt to update an invalid field on the node pool
  linode.cloud.lke_node_pool:
    cluster_id: '{{ create_cluster.cluster.id }}'

    tags: [ 'my-pool' ]
    type: g6-standard-2
    count: 2
    state: present
  register: update_pool_fail
  failed_when: '"failed to update" not in update_pool_fail.msg'

- name: Update the node pool
  linode.cloud.lke_node_pool:
    cluster_id: '{{ create_cluster.cluster.id }}'

    firewall_id: 654321
    tags: ['my-pool']
    type: g6-standard-1
    count: 1
    skip_polling: true
    label: updated-pool-label
    autoscaler:
      enabled: true
      min: 1
      max: 3
    labels:
      foo.example.com/update: updated
      foo.example.com/test2: foo
    taints:
      - key: foo.example.com/update
        value: updated
        effect: PreferNoSchedule
    state: present
  register: update_pool

- name: Assert node pool is updated
  assert:
    that:
      - update_pool.node_pool.count == 1
      - update_pool.node_pool.firewall_id == 654321
Contributor

Since these are integration tests that run against the API, I think the firewall ID should reflect that of an actual firewall.

Fortunately, there is an existing {{ firewall_id }} variable that we use to secure test resources.

For the firewall_id update, we should be able to use the firewall module at the beginning of this test to create a temporary firewall for the sake of testing.
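
Roughly, that setup task could look like the sketch below (the label and rules are placeholders, and the suite's naming and cleanup conventions may differ); the update task would then reference {{ update_firewall.firewall.id }} instead of a hard-coded ID.

- name: Create a temporary firewall for the update test
  linode.cloud.firewall:
    label: ansible-test-update-fw  # placeholder; follow the test suite's naming convention
    rules:
      inbound_policy: DROP
      outbound_policy: ACCEPT
    state: present
  register: update_firewall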

Contributor Author

Makes sense. Thanks for the pointer!
