Hubbert Smith reviews Fibre Channel network designs for no single point of failure.
- [Instructor] So, let's talk about SAN building blocks and putting those building blocks together. More often than not, we start with a preexisting Fibre Channel network. Here we see a SAN physical view. Starting at the bottom, we have an array; in the middle of the diagram, we have a SAN switch; at the upper part of the diagram, we have application servers; and we have standard Ethernet interconnecting those as a management backplane. We'll also dive into the SAN logical view, which is very different from the SAN physical view.
Instead of having a physical SAN array with physical disks, we have volumes, and instead of having physical servers, we have virtual machines and Hyper-V hosts. So, at the end of the day, we have servers and workloads running on virtual servers, and we have consolidated storage that gives us the SLA and consolidation advantages that we've previously reviewed. Let's start with the physical view, with the objective of having no single point of failure. It's the most important thing that we can do in this section.
We start with application servers. These are often configured for failover, so one server can back up another. They're often configured with dual-channel host bus adapters so they can be redundantly cabled to redundant switches. Those redundant switches connect to a storage array. That storage can be replicated to a remote storage array for business continuity and disaster recovery. And we have caching that helps speed up performance when used in conjunction with hard drives. The caching, by the way, is also redundant.
So, let's talk about how Fibre Channel gets installed and how it's used. Fibre Channel is for block I/O, meaning it's used with multi-user applications like databases or email. Fibre Channel zoning is an important tool that keeps unimportant traffic from competing with important traffic. Fibre Channel storage services are a keystone of offering service level agreements to your business units, including data protection, snapshot data recovery, and business continuity.
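The effect of zoning can be sketched as a simple membership check: an initiator port and a target port can only see each other if they share a zone. This is a minimal model, not vendor CLI; the WWPNs and zone names here are made up for illustration.

```python
# Minimal model of Fibre Channel zoning: two ports can communicate only
# if they are members of at least one common zone. WWPNs are made up.

zones = {
    "db_zone":   {"10:00:00:00:c9:aa:00:01",   # database server HBA port
                  "50:06:01:60:3b:00:00:01"},  # array target port for DB
    "mail_zone": {"10:00:00:00:c9:bb:00:01",   # mail server HBA port
                  "50:06:01:60:3b:00:00:02"},  # array target port for mail
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """True if both WWPNs share at least one zone."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

# The DB server sees its own array port, but not the mail server's port:
print(can_communicate("10:00:00:00:c9:aa:00:01", "50:06:01:60:3b:00:00:01"))  # True
print(can_communicate("10:00:00:00:c9:aa:00:01", "50:06:01:60:3b:00:00:02"))  # False
```

This is exactly how zoning keeps a mail server's traffic from ever touching the database array ports: no shared zone, no visibility.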
In contrast, iSCSI is also a block I/O protocol, and it's certainly a less expensive way to build a block I/O SAN. It's built for compatibility, not for performance. Unlike Fibre Channel, it offers no deterministic performance; it's best-effort. My advice to you storage administrators: iSCSI is great for branch offices, but be aware that you get what you pay for. Now let's dig into the physical design and its various building blocks.
Here we see a picture of a Fibre Channel host bus adapter. This is simply a card that drops into an application server and has dual ports, as you can see. Those dual ports connect to Fibre Channel cables, which connect to a Fibre Channel switch. On each end of a Fibre Channel cable there is a component called an SFP, a small form-factor pluggable. It's an optical-to-electrical converter, and it comes in various speeds. Early Fibre Channel started at one gigabit per second, then got faster, to two gigabits per second.
Subsequent generations went to four gigabits per second, then to eight gigabits, then to 16 gigabits per second. And by the way, these speeds will be on the SNIA exam. So, for a SAN physical design, our baseline is simply this: you have host bus adapters that connect to a switch that connects to a storage array. What we're missing here is no single point of failure. Here's what no single point of failure looks like in the physical design: you have a host bus adapter with two ports, each of those ports connects to one of two redundant switches, and each of those redundant switches connects to both redundant array controllers.
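That redundant topology can be checked mechanically: enumerate every HBA-port → switch → controller path, then confirm that removing any single component still leaves a working path. This is a toy sketch with illustrative component names, not a real fabric model.

```python
from itertools import product

# Each path is (HBA port, switch, array controller). Port 1 is cabled to
# switch A, port 2 to switch B; each switch is cabled to both controllers.
paths = [(port, sw, ctrl)
         for (port, sw), ctrl in product([("hba_port1", "switchA"),
                                          ("hba_port2", "switchB")],
                                         ["ctrlA", "ctrlB"])]

def survives(failed_component: str) -> bool:
    """True if at least one path avoids the failed component."""
    return any(failed_component not in path for path in paths)

components = {"hba_port1", "hba_port2", "switchA", "switchB", "ctrlA", "ctrlB"}
print(all(survives(c) for c in components))  # True: no single point of failure
```

The same check explains serviceability: taking switch A down for an upgrade is indistinguishable, from the path model's point of view, from switch A failing.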
This is all in the interest of no single point of failure, and it's also designed for serviceability. In addition to being able to survive a cable failure or a switch failure, we can also survive switch upgrades, server upgrades, or array upgrades. This design for serviceability is a keystone of storage administration. So, what's happening beneath the covers in the storage array? We mentioned no single point of failure; this applies all the way through the guts of the storage array. We've seen that storage arrays have two controllers.
Each of those controllers runs RAID logic to protect against disk failure. Each of those storage controllers has a path to every drive, so the drives themselves, the hard drives and the SSDs, are dual-port. SAS, Serial Attached SCSI, is an industry standard. It's used in hard drives, and it's also used in controllers. It is the essence of this dual-channel backplane that allows two controllers to connect to every drive and every drive to connect to two controllers. Therefore, if a controller goes down, or if a controller simply needs a firmware update, the other controller can take over the workload.
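The dual-port wiring inside the array can be sketched the same way: every drive exposes a port to each controller, so losing one controller leaves every drive reachable. Drive and controller names here are illustrative.

```python
# Every dual-port SAS drive is wired to both controllers across the
# dual-channel backplane. Names are illustrative.
drive_ports = {f"drive{n}": {"ctrlA", "ctrlB"} for n in range(1, 5)}

def reachable_drives(alive_controllers: set) -> set:
    """Drives still reachable when only `alive_controllers` are up."""
    return {d for d, ports in drive_ports.items() if ports & alive_controllers}

# Take controller A offline for a firmware update; B still serves every drive:
print(reachable_drives({"ctrlB"}) == set(drive_ports))  # True
```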
Also worth noting: controller-to-controller heartbeat happens across that dual-channel backplane, so one controller knows if the other controller is down or slow, and can provide diagnostics and workload balancing. This is very sophisticated, very well-engineered stuff, and it's available from most storage array vendors. The key learning here is how we engineer for no single point of failure.
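The heartbeat idea can be sketched as a timeout check: each controller records when it last heard from its peer, and declares the peer down once the silence exceeds a threshold. The interval and threshold values here are illustrative, not vendor defaults.

```python
import time

FAILOVER_THRESHOLD = 3.0  # seconds of silence before declaring the peer down

class Controller:
    """Toy model of controller-to-controller heartbeat over the backplane."""

    def __init__(self, name):
        self.name = name
        self.last_peer_heartbeat = time.monotonic()

    def receive_heartbeat(self):
        """Called each time a heartbeat arrives across the backplane."""
        self.last_peer_heartbeat = time.monotonic()

    def peer_is_down(self, now=None):
        """True if the peer has been silent longer than the threshold."""
        now = time.monotonic() if now is None else now
        return now - self.last_peer_heartbeat > FAILOVER_THRESHOLD

ctrl_a = Controller("A")
ctrl_a.receive_heartbeat()
print(ctrl_a.peer_is_down())                       # False: peer is healthy
print(ctrl_a.peer_is_down(time.monotonic() + 10))  # True: time to take over
```

In a real array, the "take over" branch is where the surviving controller assumes the failed controller's workload, which is exactly the failover behavior described above.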