It’s fairly common on Linux hosts to have disk devices that are re-ordered during a (re)boot, such that you cannot depend on a disk represented as /dev/sdc today coming back as /dev/sdc following a reboot tomorrow.
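One way around this is to reference disks by a persistent identifier rather than the kernel-assigned device name. A minimal sketch (the UUID shown is purely illustrative; check your own with `blkid`):

```shell
# Persistent identifiers live here and survive device re-ordering:
ls -l /dev/disk/by-uuid/
ls -l /dev/disk/by-id/

# Show the UUID of a given partition (device name is hypothetical):
blkid /dev/sdc1

# Reference the UUID in /etc/fstab instead of the raw device name, e.g.:
# UUID=0a3f1c2d-1111-2222-3333-444455556666  /data  ext4  defaults  0 2
```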
Secure Shell (SSH) is a wonderful piece of technology, allowing us to connect securely to remote hosts across local and wide area networks.
As we’re connecting to an unknown host for the first time, the remote host will present its host key; however, the authenticity of this can’t be guaranteed, and we run the risk of a Man-In-The-Middle (MITM) attack. Fear not, penguin chums: there are ways of verifying the authenticity of the remote host before passing over credentials.
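The usual approach is to obtain the host key fingerprint out-of-band (e.g. from the server’s console) and compare it with what the network presents. A sketch, with `example.com` as a placeholder hostname:

```shell
# Fetch the remote host's public key over the network and print its
# fingerprint, without actually opening an SSH session:
ssh-keyscan -t ed25519 example.com | ssh-keygen -lf -

# On the server itself (via console, not the network), print the same
# fingerprint for comparison:
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
```

If the two fingerprints match, you can accept the key on first connect with confidence.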
In this article, we looked at what Linux LVM is & how to utilise it to create logical (or virtual) disks that our server could use to persistently store data.
How do we facilitate restores if/when we’re taking storage array-level snapshots of an LVM volume group’s constituent volumes? Fortunately, LVM not only has its own snapshot functionality, but also has an import feature that we can leverage when dealing with clones of the original physical volumes, and this article will walk through how to do just that.
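The key tool here is `vgimportclone`, which rewrites the duplicate PV/VG identifiers on a cloned disk so it can coexist with the original. A minimal sketch, assuming the array-level clone is presented to the host as /dev/sdc and the original VG is named vg_data (both names hypothetical):

```shell
# Give the cloned PV new UUIDs and a new VG name so it doesn't clash
# with the still-active original volume group:
vgimportclone --basevgname vg_data_restore /dev/sdc

# Activate the imported VG and mount a logical volume from it:
vgchange -ay vg_data_restore
mkdir -p /mnt/restore
mount /dev/vg_data_restore/lv_data /mnt/restore
```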
OK, so we’ve managed to connect to an iSCSI target, or have created our FC zones so we can communicate with our storage platform (or both). Wonderful.
But wait, how do we actually get Linux to see the storage volumes that we create and present to our host? This article will show a couple of methods of identifying new/recently added storage resources on our Linux host (CentOS is being used in this example).
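One common method is to ask the kernel to rescan each SCSI host bus for newly presented LUNs. A sketch (requires root; the wildcard channel/target/LUN triple tells the kernel to scan everything):

```shell
# Rescan every SCSI host adapter for new devices:
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "${host}/scan"
done

# Confirm the new device appeared:
lsblk
dmesg | tail
```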
In this article, we walked through the process of creating new Linux Logical Volume Manager (LVM) resources – physical volumes, volume groups & logical volumes.
One of the real draws of using LVM to manage storage is that you can subsequently add physical & logical capacity as & when required. This article will demonstrate the simplicity of increasing existing – potentially live – resources on the fly, without downtime.
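As a sketch of how simple this is, here is the typical grow sequence, assuming a newly presented disk /dev/sdd and an existing VG/LV named vg_data/lv_data (all names hypothetical):

```shell
# Initialise the new disk as a physical volume and add it to the VG:
pvcreate /dev/sdd
vgextend vg_data /dev/sdd

# Extend the logical volume into all of the newly freed extents:
lvextend -l +100%FREE /dev/vg_data/lv_data

# Grow the filesystem online (ext4 shown; use xfs_growfs for XFS):
resize2fs /dev/vg_data/lv_data
```

All of this can be done while the filesystem is mounted and in use.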
In essence, Linux LVM takes physical resources and allows us, as administrators, to create logical (hence the name) devices that are abstracted (or decoupled) from the underlying physical resources. This is similar in concept & implementation to storage virtualisation, which we won’t discuss here, but will do elsewhere.
Using LVM to manage storage is advantageous for a number of reasons, chiefly that it allows us to easily scale, adjust and add resilience to the storage devices we use as server administrators. This article describes the basic process of using LVM to configure & manage storage on a CentOS 6.x host.
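The basic build follows the PV → VG → LV hierarchy. A minimal sketch, assuming a blank disk /dev/sdb and hypothetical VG/LV names:

```shell
# Mark the disk as an LVM physical volume:
pvcreate /dev/sdb

# Create a volume group on top of it:
vgcreate vg_data /dev/sdb

# Carve a logical volume out of all available extents:
lvcreate -n lv_data -l 100%FREE vg_data

# Put a filesystem on the LV and it's ready to mount:
mkfs.ext4 /dev/vg_data/lv_data
```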
When managing Storage Area Networks (SANs), it’s feasible that from time to time one may wish to attach & detach storage volumes (or disks) from hosts. This article describes the process to remove an iSCSI volume (disk) from a CentOS Linux server without needing to disconnect from the target completely.
If we simply pull disks, host servers don’t generally react too well. So it’s important that we run through a couple of steps to ensure that the host stops trying to maintain a connection before we actually take the volume offline. It’s also important to remember that storage volumes should only be dismounted and disconnected from a host once all I/O has ceased to/from the disk.
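The steps above can be sketched as follows, assuming the volume to remove is /dev/sdc (hypothetical) and all I/O to it has already ceased:

```shell
# Unmount the filesystem so nothing is using the device:
umount /mnt/data

# Flush any buffered writes to the device:
blockdev --flushbufs /dev/sdc

# Tell the kernel to delete the block device so it stops trying to
# maintain the connection; only then take the volume offline on the array:
echo 1 > /sys/block/sdc/device/delete
```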
Occasionally, we don’t keep up with the inner workings of operating systems and the behaviour they exhibit (sometimes, for perfectly good reasons).
Take this as a case in point: We have a CentOS 6 host which, for some reason, seems to take quite a while to boot when compared to other similar servers we have in the estate. The machine is healthy and has the same base image, update & configuration baseline as our other hosts.
In Linux, the traditional fdisk command doesn’t allow you to create a GPT disk label, so we’re stuck with the 2TB size limitation of MBR partitions. Fortunately for us, Linux has something called parted (short for ‘Partition Editor’). In this post, we’ll look at the steps required to create partitions & filesystems using parted.
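As a taste of what’s to come, here is the basic sequence, assuming a blank disk /dev/sdb (device name hypothetical):

```shell
# Write a GPT label to the disk (destroys any existing partition table):
parted /dev/sdb mklabel gpt

# Create a single partition spanning the whole disk, aligned optimally:
parted -a optimal /dev/sdb mkpart primary ext4 0% 100%

# Create a filesystem on the new partition:
mkfs.ext4 /dev/sdb1
```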
The following article describes the steps required after connecting to a StorSimple iSCSI volume and configuring multipathing for the device (or vice versa), to actually use the storage. This includes adding a partition, creating a filesystem, mounting the filesystem, writing some test data and testing persistence following a reboot.
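Those steps can be sketched as follows. The multipath alias `mpatha` and mount point are hypothetical, and the partition may appear as `mpatha1` or `mpathap1` depending on your distribution:

```shell
# Partition the multipath device and create a filesystem:
parted /dev/mapper/mpatha mklabel gpt
parted /dev/mapper/mpatha mkpart primary ext4 0% 100%
mkfs.ext4 /dev/mapper/mpatha1

# Mount it and write some test data:
mkdir -p /mnt/storsimple
mount /dev/mapper/mpatha1 /mnt/storsimple
echo "persistence test" > /mnt/storsimple/testfile

# Note the UUID for a persistent /etc/fstab entry:
blkid /dev/mapper/mpatha1
# Example fstab line (UUID illustrative; _netdev waits for the network):
# UUID=1234-abcd  /mnt/storsimple  ext4  _netdev  0 0
```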
The purpose of this article is to complete the end-to-end view of the process. As this is only a baseline tutorial, please do add customisations when following the steps to make it relevant for your environment(s).