Trying to Setup a Lab - Part 2
Once I had an iSCSI storage server, I focused on the storage clients, called initiators in iSCSI terminology.
This time I had several Ansible modules to deal with, including the community.general.open_iscsi module. As I needed to include some sensitive information, I used ansible-vault create iscsi-secrets.yml
to create an encrypted file containing the variables I needed:
iscsi_portal: 192.168.123.168
chapuserid: chapnode
chappassword: chappass
volumes: ["vol01", "vol02", "vol03", "vol04"]
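As a side note, the encrypted file can be reviewed or changed later with the same tool; the vault passphrase is requested interactively:
# Inspect or edit the encrypted variables without decrypting them to disk.
ansible-vault view iscsi-secrets.yml
ansible-vault edit iscsi-secrets.yml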
The first task was to make sure the package I needed was installed:
- name: Install iscsi-initiator
  package:
    name: iscsi-initiator-utils
    state: present
And start and enable the service:
- name: Setup iscsid service
  service:
    name: iscsid
    state: started
    enabled: yes
Then I configured the initiator name and created an iscsid.conf
file, stripping out the comments from the original file and setting the userid and password.
- name: Setup initiator name
  copy:
    content: "InitiatorName=iqn.2021-03.local.garmo.{{ ansible_hostname }}:{{ ansible_hostname }}"
    dest: /etc/iscsi/initiatorname.iscsi
    owner: root
    group: root
    mode: 0644
  notify: restart iscsid

- name: Setup iscsid
  template:
    src: iscsid.conf.j2
    dest: /etc/iscsi/iscsid.conf
    owner: root
    group: root
    mode: 0600
  notify: restart iscsid
Don’t worry about notifying the same handler twice: Ansible is smart enough to run it only once.
The template is quite simple:
iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
iscsid.safe_logout = Yes
node.startup = automatic
node.leading_login = No
node.session.auth.authmethod = CHAP
node.session.auth.username = {{ chapuserid }}
node.session.auth.password = {{ chappassword }}
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.initial_login_retry_max = 8
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.xmit_thread_priority = -20
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
node.conn[0].iscsi.HeaderDigest = None
node.session.nr_sessions = 1
node.session.iscsi.FastAbort = Yes
node.session.scan = auto
I wanted to be sure the service was restarted before trying to contact the iSCSI Target, so I flushed the pending handlers.
- name: Flush handlers if initiator name or configuration changed
  meta: flush_handlers

# the handler being called
handlers:
  - name: restart iscsid
    service:
      name: iscsid
      state: restarted
Now it’s time to contact the server, discover the portals, and log in:
- name: Perform a discovery on {{ iscsi_portal }} and show available target nodes
  open_iscsi:
    show_nodes: yes
    discover: yes
    ip: "{{ iscsi_portal }}"
    port: "3260"
    login: yes
    node_auth: CHAP
    node_user: "{{ chapuserid }}"
    node_pass: "{{ chappassword }}"
    auto_node_startup: yes
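For reference, this is roughly what the module does under the hood; the CHAP credentials are picked up from the iscsid.conf we templated earlier:
# Discover the portal, log in to the discovered targets,
# then list the active sessions to confirm the login worked.
iscsiadm -m discovery -t sendtargets -p 192.168.123.168:3260
iscsiadm -m node --login
iscsiadm -m session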
Four new block devices were detected on my client machine, but in an automated environment it is difficult to know which block devices were just created. Since I had set labels when formatting my logical volumes, I was able to take advantage of them and mount the block devices by label.
- name: Mount dir
  mount:
    src: "LABEL={{ item }}"
    path: /mnt/{{ item }}
    fstype: xfs
    state: mounted
    opts: "defaults,_netdev"
  loop: "{{ volumes }}"
At first I had problems when rebooting the client nodes because of the mount points: they are network devices, but they were mounted as if they were local drives, and systemd
didn’t wait for the network to be up before trying to mount them, so I had to mark them as network devices with the _netdev
mount option.
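For illustration, the entries the mount module persists in /etc/fstab look roughly like this (one per volume; the output below is approximate):
grep _netdev /etc/fstab
# LABEL=vol01 /mnt/vol01 xfs defaults,_netdev 0 0
# LABEL=vol02 /mnt/vol02 xfs defaults,_netdev 0 0
# ... and so on for the remaining volumes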
Disconnecting from target
During my tests I wanted to disconnect an initiator from the iSCSI Target, so I used the following commands:
iscsiadm -m node --targetname iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea --portal 192.168.123.168:3260 -u
Logging out of session [sid: 7, target: iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea, portal: 192.168.123.168,3260]
Logout of [sid: 7, target: iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea, portal: 192.168.123.168,3260] successful.
And then I removed the node record:
iscsiadm -m node --targetname iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea --portal 192.168.123.168:3260 -o delete
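If you want to tear down every session at once instead of a single target, iscsiadm accepts the same operation without a target name:
# Log out of all node records, then confirm nothing is left.
iscsiadm -m node -u
iscsiadm -m session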
Improving iSCSI Target configuration
During this lab I recreated all of the VMs several times, so I thought about a way to improve the process. There were a lot of commands to run on the server, and I didn’t want to use the command
or shell
modules, as they always return a changed
state, and the script
module didn’t let me use templates and my variables. So I used the template module to create the script, and a handler to run it only when the file changed.
- name: Create setup file
  template:
    src: setup-targetcli.sh.j2
    dest: /root/setup-targetcli.sh
    owner: root
    group: root
    mode: 0750
  tags: setupfile
  notify: setup iscsi

handlers:
  - name: setup iscsi
    command: /root/setup-targetcli.sh
I reused the same variables file, and the templated script was:
#!/bin/bash
# Template to set up iscsi targets
{% set portal = "iqn.2021-03.local.garmo." + ansible_hostname + ":" + ansible_hostname %}
targetcli iscsi/ create "{{ portal }}"
{% for volume in volumes %}
targetcli backstores/block create dev=/dev/mapper/{{ vgname }}-{{ volume }} name={{ vgname }}-{{ volume }}
targetcli iscsi/{{ portal }}/tpg1/luns create /backstores/block/{{ vgname }}-{{ volume }}
{% endfor %}
targetcli /iscsi/{{ portal }}/tpg1 set attribute authentication=1
{% for initiator in initiators %}
{% set initwwn = "iqn.2021-03.local.garmo." + initiator + ":" + initiator %}
targetcli iscsi/{{ portal }}/tpg1/acls create {{ initwwn }}
targetcli iscsi/{{ portal }}/tpg1/acls/{{ initwwn }} set auth userid={{ chapuserid }} password={{ chappassword }}
{% endfor %}
targetcli saveconfig
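After the handler has run, the resulting configuration can be reviewed on the server with targetcli itself:
# Print the resulting iSCSI object tree: target, TPG, LUNs and ACLs.
targetcli ls /iscsi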
Conclusion
I was able to create my lab environment in two steps that can easily be merged into one playbook.
---
- name: Setup storage server
  import_playbook: storage-server.yml

- name: Setup workers
  import_playbook: swarmworkers.yml
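Assuming the merged file is saved as site.yml (a name I’m choosing here) and the inventory defines the storage and swarmworkers groups, a full run would look like:
# 'site.yml' and 'inventory' are placeholder names for this sketch;
# --ask-vault-pass decrypts iscsi-secrets.yml on the fly.
ansible-playbook -i inventory site.yml --ask-vault-pass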
And best of all: I enjoyed practicing with Ansible. At my first attempt to run a playbook I was a little scared, but by practicing every week I’m getting a better understanding of it, which in turn makes me more confident about my skills. Now I try to use Ansible at every opportunity that pops up, and it is helping me a lot, both in my learning and in my daily work.
The complete playbooks
swarmworkers.yml
---
- name: Setup Swarm Worker
  hosts: swarmworkers
  vars_files: iscsi-secrets.yml
  tasks:
    - name: Install iscsi-initiator
      package:
        name: iscsi-initiator-utils
        state: present

    - name: Setup iscsid service
      service:
        name: iscsid
        state: started
        enabled: yes

    - name: Setup initiator name
      copy:
        content: "InitiatorName=iqn.2021-03.local.garmo.{{ ansible_hostname }}:{{ ansible_hostname }}"
        dest: /etc/iscsi/initiatorname.iscsi
        owner: root
        group: root
        mode: 0644
      notify: restart iscsid

    - name: Setup iscsid
      template:
        src: iscsid.conf.j2
        dest: /etc/iscsi/iscsid.conf
        owner: root
        group: root
        mode: 0600
      notify: restart iscsid

    - name: Flush handlers if initiator name or configuration changed
      meta: flush_handlers

    - name: Perform a discovery on {{ iscsi_portal }} and show available target nodes
      open_iscsi:
        show_nodes: yes
        discover: yes
        ip: "{{ iscsi_portal }}"
        port: "3260"
        login: yes
        node_auth: CHAP
        node_user: "{{ chapuserid }}"
        node_pass: "{{ chappassword }}"
        auto_node_startup: yes

    - name: Mount dir
      mount:
        src: "LABEL={{ item }}"
        path: /mnt/{{ item }}
        fstype: xfs
        state: mounted
        opts: "defaults,_netdev"
      loop: "{{ volumes }}"

  handlers:
    - name: restart iscsid
      service:
        name: iscsid
        state: restarted
storage-server.yml
---
- name: Setup storage server
  hosts: storage
  vars:
    vgname: targetsvg
    vgdevice: /dev/sdb
    initiators: ["vb166", "vb167"]
  vars_files: iscsi-secrets.yml
  tasks:
    - name: Install targetcli, scsi-target-utils and targetd packages
      package:
        name:
          - targetcli
          - scsi-target-utils
          - targetd
        state: latest

    - name: Start and enable target service
      service:
        name: target
        state: started
        enabled: yes

    - name: Create a volume group for LUNs
      lvg:
        vg: "{{ vgname }}"
        pvs: "{{ vgdevice }}"

    - name: Create four LUNs
      lvol:
        lv: "{{ item }}"
        vg: "{{ vgname }}"
        size: "2040"
      loop: "{{ volumes }}"
      notify: create filesystems

    - name: Ensure python-firewall is installed
      package:
        name: python-firewall
        state: present

    - name: Ensure firewalld is up and running
      service:
        name: firewalld
        state: started
        enabled: yes

    - name: Allow iscsi-target connections
      firewalld:
        zone: public
        service: iscsi-target
        state: enabled
        permanent: yes
        immediate: yes

    - name: Create setup file
      template:
        src: setup-targetcli.sh.j2
        dest: /root/setup-targetcli.sh
        owner: root
        group: root
        mode: 0750
      tags: setupfile
      notify: setup iscsi

  handlers:
    - name: create filesystems
      filesystem:
        dev: /dev/mapper/{{ vgname }}-{{ item }}
        fstype: xfs
        opts: -L {{ item }}
      loop: "{{ volumes }}"

    - name: setup iscsi
      command: /root/setup-targetcli.sh