Trying to Set Up a Lab - Part I
This weekend I tried to completely rewrite how I deploy the Elasticsearch cluster in my home lab. My first deployment used a couple of shell scripts that updated system packages and installed Docker among other tools, but the heart of the deployment was a docker-compose.yml embedded in the script itself.
That approach worked nicely and helped me deploy an almost-production-ready cluster when I came back from holidays, but, as I'm learning Ansible, I felt compelled to convert that bash script into a playbook.
Then one word came to mind: K.I.S.S. So I opened Spotify, searched for them, and got ready to make things as complicated as I could.
Preparing the host for running an iSCSI Target
I also wanted to turn the Docker hosts into swarm nodes, but in order to run a swarm I needed some shared storage, so I set up an iSCSI target. I had done this several times at my previous job using FreeBSD as the underlying OS, and I expected the CentOS setup to be similar.
I was wrong: I couldn't find how to set up an iSCSI target using config files, only instructions for setting it up with targetcli, so my goal of a 100% Ansible setup was a failure.
I used some vars:
vars:
  # The volume group name
  vgname: targetsvg
  # The physical device for the volumes
  vgdevice: /dev/sdb
  # The volume name format for the sequence
  volnamefmt: "vol%02x"
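A side note on volnamefmt: with_sequence runs each number through a printf-style format, and %02x is zero-padded hexadecimal, so for four volumes it looks decimal (vol01 … vol04), but a fifteenth volume would come out as vol0f. A quick shell check of the same format string:

```shell
# %02x is zero-padded hex: identical to decimal up to 9, hex beyond that.
for i in 1 2 3 4 15; do
  printf 'vol%02x\n' "$i"
done
# prints vol01, vol02, vol03, vol04, vol0f (one per line)
```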
Setting up the target service and installing the targetcli package
- name: Start and enable target service
  service:
    name: target
    state: started
    enabled: yes
- name: Install targetcli package
  package:
    name: targetcli
    state: latest
Then I created the logical volumes to share.
- name: create a volumegroup for luns
  lvg:
    vg: "{{ vgname }}"
    pvs: "{{ vgdevice }}"
- name: create four luns
  lvol:
    lv: "{{ item }}"
    vg: "{{ vgname }}"
    size: "2040"
  with_sequence: count=4 format={{ volnamefmt }}
  notify: create filesystems
On my first try I fell into the pitfall of using relative volume sizes (25%FREE): the available space was recalculated at each iteration, so I got volumes of 2 GB, 1.5 GB, 1.125 GB and roughly 840 MB. Then I tried 25%VG as the size, but it got rounded down to an extent boundary, so every time I ran the playbook the task fired again because of the size mismatch. Using a fixed size solved the problem.
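The shrinking-size behaviour is easy to reproduce with a bit of shell arithmetic; a sketch assuming an 8 GiB volume group, which is what the sizes from my run suggest:

```shell
# Each 25%FREE allocation is computed against the space that is LEFT,
# so successive volumes shrink.
free=8192   # assumed VG size in MiB
for n in 1 2 3 4; do
  size=$((free / 4))      # 25% of the remaining free space
  free=$((free - size))
  printf 'vol0%d: %dM\n' "$n" "$size"
done
# prints vol01: 2048M, vol02: 1536M, vol03: 1152M, vol04: 864M
```

The 864M here becomes ~840M on the real system once LVM rounds to extent boundaries.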
As I wanted to format the volumes only when they were created, I fired a handler.
handlers:
  - name: create filesystems
    filesystem:
      dev: /dev/mapper/{{ vgname }}-{{ item }}
      fstype: xfs
    with_sequence: count=4 format={{ volnamefmt }}
Ansible requires the python-firewall package to manage firewalld, so I installed it before enabling the iSCSI target service on the local firewall.
- name: Ensure python-firewall is installed
  package:
    name: python-firewall
    state: present
- name: Ensure firewalld is up and running
  service:
    name: firewalld
    state: started
    enabled: yes
- name: Allow iscsi-target connections
  firewalld:
    zone: public
    service: iscsi-target
    state: enabled
    permanent: yes
    immediate: yes
Once everything was in place, it was time to set up the iSCSI Target.
Setting up the iSCSI Target
As I said before, I didn't find a way to set up an iSCSI target using config files, so I followed Configure iSCSI Target & Initiator on CentOS 7 / RHEL7 by ITzGeek, adapting the commands a bit. I tried to run them non-interactively, with an eye on putting them into a script.
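Since the backstore commands only differ in the volume number, they loop well. A sketch with a dry-run guard, using the volume group and naming scheme from the vars above; DRY_RUN defaults to on, so nothing is touched until you flip it to 0:

```shell
# Build the four backstore-creation commands; print them unless DRY_RUN=0.
: "${DRY_RUN:=1}"
VG=targetsvg
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }
for i in 1 2 3 4; do
  vol=$(printf 'vol%02x' "$i")
  run targetcli backstores/block create \
      dev="/dev/mapper/${VG}-${vol}" name="target-${vol}"
done
```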
First, I created the backstores using a block device, the logical volumes.
[root@vb168 ~]# targetcli backstores/block create dev=/dev/mapper/targetsvg-vol01 name=target-vol01
Created block storage object target-vol01 using /dev/mapper/targetsvg-vol01.
[root@vb168 ~]# targetcli backstores/block create dev=/dev/mapper/targetsvg-vol02 name=target-vol02
Created block storage object target-vol02 using /dev/mapper/targetsvg-vol02.
[root@vb168 ~]# targetcli backstores/block create dev=/dev/mapper/targetsvg-vol03 name=target-vol03
Created block storage object target-vol03 using /dev/mapper/targetsvg-vol03.
[root@vb168 ~]# targetcli backstores/block create dev=/dev/mapper/targetsvg-vol04 name=target-vol04
Created block storage object target-vol04 using /dev/mapper/targetsvg-vol04.
Then the target and a default portal. As this is an isolated environment, I kept the default portal, which listens on all interfaces.
[root@vb168 ~]# targetcli iscsi/ create
Created target iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
Once I had the portal, it was time to create some LUNs pointing to the backstores. I did this part interactively.
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/luns create /backstores/block/target-vol01
Created LUN 0.
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/luns create /backstores/block/target-vol02
Created LUN 1.
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/luns create /backstores/block/target-vol03
Created LUN 2.
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/luns create /backstores/block/target-vol04
Created LUN 3.
An overview of how things were going:
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1 ls
o- tpg1 .................................... [no-gen-acls, no-auth]
  o- acls ............................................... [ACLs: 0]
  o- luns ............................................... [LUNs: 4]
  | o- lun0 [block/target-vol01 (/dev/mapper/targetsvg-vol01) (default_tg_pt_gp)]
  | o- lun1 [block/target-vol02 (/dev/mapper/targetsvg-vol02) (default_tg_pt_gp)]
  | o- lun2 [block/target-vol03 (/dev/mapper/targetsvg-vol03) (default_tg_pt_gp)]
  | o- lun3 [block/target-vol04 (/dev/mapper/targetsvg-vol04) (default_tg_pt_gp)]
  o- portals ......................................... [Portals: 1]
    o- 0.0.0.0:3260 .......................................... [OK]
Create iSCSI ACLs
In order to access the target from the initiators, I had to set up an ACL for each initiator node.
First, enable authentication on the target portal group.
[root@vb168 ~]# targetcli /iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1 set attribute authentication=1
Parameter authentication is now '1'.
Then create the ACLs
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/acls create iqn.2021-03.local.garmo.vb166:vb166
Created Node ACL for iqn.2021-03.local.garmo.vb166:vb166
Created mapped LUN 3.
Created mapped LUN 2.
Created mapped LUN 1.
Created mapped LUN 0.
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/acls create iqn.2021-03.local.garmo.vb167:vb167
Created Node ACL for iqn.2021-03.local.garmo.vb167:vb167
Created mapped LUN 3.
Created mapped LUN 2.
Created mapped LUN 1.
Created mapped LUN 0.
Then the userid and password for each ACL:
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/acls/iqn.2021-03.local.garmo.vb166:vb166 set auth userid=swarmnode password=swarmpass
Parameter password is now 'swarmpass'.
Parameter userid is now 'swarmnode'.
[root@vb168 ~]# targetcli iscsi/iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea/tpg1/acls/iqn.2021-03.local.garmo.vb167:vb167 set auth userid=swarmnode password=swarmpass
Parameter password is now 'swarmpass'.
Parameter userid is now 'swarmnode'.
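The ACL and auth steps repeat verbatim per initiator, so they can be generated in a loop too. This sketch only prints the commands for review (pipe the output to sh to apply them); the IQNs and CHAP credentials are the ones from the transcript above:

```shell
# Emit the per-initiator ACL + CHAP commands; review them, then pipe to sh.
TARGET=iqn.2003-01.org.linux-iscsi.vb168.x8664:sn.4902c294c8ea
for iqn in iqn.2021-03.local.garmo.vb166:vb166 \
           iqn.2021-03.local.garmo.vb167:vb167; do
  echo "targetcli iscsi/${TARGET}/tpg1/acls create ${iqn}"
  echo "targetcli iscsi/${TARGET}/tpg1/acls/${iqn} set auth userid=swarmnode password=swarmpass"
done
```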
Remember to save the configuration, or it will be lost on the next reboot. I forgot to save it and lost a lot of time figuring out why it failed.
[root@vb168 ~]# targetcli saveconfig
Configuration saved to /etc/target/saveconfig.json
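A quick way to check that the save actually captured everything is to count objects in the JSON. A sketch; the layout assumed here (a top-level targets list with per-TPG luns and node_acls) is what rtslib wrote on my box, so adjust it if yours differs:

```shell
# Count LUNs and ACLs in a saveconfig.json passed as the first argument.
count_saved() {
  python3 -c '
import json, sys
tpg = json.load(open(sys.argv[1]))["targets"][0]["tpgs"][0]
print(len(tpg["luns"]), "LUNs,", len(tpg["node_acls"]), "ACLs")
' "$1"
}
# usage: count_saved /etc/target/saveconfig.json
```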
Conclusion
It was a bit upsetting not to find a way to set up the iSCSI target using Ansible and a template file, but at least I learned about the targetcli tool and practiced a bit with the LVM- and filesystem-related modules.
In my next post, I’ll cover the setup of the initiator nodes.
References
I consulted a lot of guides this time.
Configure iSCSI Target & Initiator on CentOS 7 / RHEL7 https://www.itzgeek.com/how-tos/linux/centos-how-tos/configure-iscsi-target-initiator-on-centos-7-rhel7.html
How Install and Configure iSCSI Storage server on CentOS 7 https://kifarunix.com/how-install-and-configure-iscsi-storage-server-on-centos-7/
Official linux-iscsi group’s page http://linux-iscsi.org/wiki/ISCSI
Ansible open_iscsi module https://docs.ansible.com/ansible/2.10/collections/community/general/open_iscsi_module.html#ansible-collections-community-general-open-iscsi-module
How to configure iSCSI target & initiator on RHEL/CentOS 7.6 https://www.linuxteck.com/how-to-configure-iscsi-target-initiator-on-rhel-centos-7-6/
Deleting iSCSI connection https://www.golinuxcloud.com/delete-remove-inactive-iscsi-target-rhel-7-linux/
Configure iSCSI Storage Server on CentOS 8 https://linuxhint.com/iscsi_storage_server_centos/
Complete playbook
storageserver.yml
---
- name: Install iscsi targetcli
  hosts: storage
  vars:
    vgname: targetsvg
    vgdevice: /dev/sdb
    volnamefmt: "vol%02x"
  tasks:
    - name: Start and enable target service
      service:
        name: target
        state: started
        enabled: yes
    - name: Install targetcli package
      package:
        name: targetcli
        state: latest
    - name: create a volumegroup for luns
      lvg:
        vg: "{{ vgname }}"
        pvs: "{{ vgdevice }}"
    - name: create four luns
      lvol:
        lv: "{{ item }}"
        vg: "{{ vgname }}"
        size: "2040"
      with_sequence: count=4 format={{ volnamefmt }}
      notify: create filesystems
    - name: Ensure python-firewall is installed
      package:
        name: python-firewall
        state: present
    - name: Ensure firewalld is up and running
      service:
        name: firewalld
        state: started
        enabled: yes
    - name: Allow iscsi-target connections
      firewalld:
        zone: public
        service: iscsi-target
        state: enabled
        permanent: yes
        immediate: yes
  handlers:
    - name: create filesystems
      filesystem:
        dev: /dev/mapper/{{ vgname }}-{{ item }}
        fstype: xfs
      with_sequence: count=4 format={{ volnamefmt }}