As native Non-volatile Memory Express (NVMe®) shared-storage arrays continue enhancing our ability to store and access more information faster across a much bigger network, customers of all sizes – enterprise, mid-market and SMBs – confront a common question: what is required to take advantage of this quantum leap forward in speed and capacity?

Of course, NVMe technology itself is not new; it is commonly found in laptops, servers and enterprise storage arrays. NVMe provides an efficient command set specific to memory-based storage, is designed to run over PCIe 3.0 or PCIe 4.0 bus architectures for increased performance, and, with 64,000 command queues of 64,000 commands each, offers far more scalability than other storage protocols.
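
To put that queue math in perspective, here is a trivial back-of-the-envelope sketch; the per-device SCSI/SAS queue depth used below is a commonly quoted typical value, not a figure from this article:

```python
# Rough scale comparison of NVMe queueing versus a typical SCSI/SAS device.
# The SAS queue depth below is an assumed, commonly quoted per-device value.
NVME_QUEUES = 64_000           # command queues per NVMe controller
NVME_CMDS_PER_QUEUE = 64_000   # commands per queue
SAS_QUEUE_DEPTH = 254          # typical single-queue depth for a SAS device (assumption)

nvme_outstanding = NVME_QUEUES * NVME_CMDS_PER_QUEUE
print(f"NVMe outstanding commands: {nvme_outstanding:,}")        # 4,096,000,000
print(f"Typical SAS outstanding commands: {SAS_QUEUE_DEPTH:,}")
print(f"Scaling headroom: ~{nvme_outstanding // SAS_QUEUE_DEPTH:,}x")
```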

Unfortunately, most of the NVMe in use today is held captive in the system in which it is installed. While there are a few storage vendors offering NVMe arrays on the market today, the vast majority of enterprise data center and mid-market customers are still using traditional storage area networks, running SCSI protocol over either Fibre Channel or Ethernet Storage Area Networks (SAN).
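
One quick way to see how a given server's NVMe is attached is to look at each controller's transport. Below is a minimal, Linux-only sketch; it assumes the model and transport attributes that the kernel nvme driver exposes under /sys/class/nvme:

```python
#!/usr/bin/env python3
"""List NVMe controllers and show whether each is local (PCIe) or fabric-attached."""
from pathlib import Path

def list_nvme_controllers() -> None:
    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        # Per-controller attributes exposed by the Linux nvme driver.
        model = (ctrl / "model").read_text().strip()
        transport = (ctrl / "transport").read_text().strip()  # pcie, fc, rdma, tcp, loop
        kind = "local" if transport == "pcie" else "fabric-attached"
        print(f"{ctrl.name}: {model} ({transport}, {kind})")

if __name__ == "__main__":
    list_nvme_controllers()
```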

The newest storage networks, however, will be enabled by what we call NVMe over Fabric (NVMe-oF) networks. As with SCSI today, NVMe-oF will offer users a choice of transport protocols. Today, there are three standard protocols that will likely make significant headway into the marketplace (a quick host-side check for each is sketched after the list). These include:

  • NVMe over Fibre Channel (FC-NVMe)
  • NVMe over RoCE RDMA (NVMe/RoCE)
  • NVMe over TCP (NVMe/TCP)
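
On Linux, each of these transports is handled by its own kernel driver. Here is a minimal sketch, assuming the upstream module names nvme_fc, nvme_rdma and nvme_tcp, for checking which ones a host currently has loaded:

```python
#!/usr/bin/env python3
"""Report which NVMe-oF transport modules are loaded on a Linux host."""
from pathlib import Path

# Assumed upstream kernel module names for each transport.
TRANSPORT_MODULES = {
    "FC-NVMe": "nvme_fc",
    "NVMe/RoCE": "nvme_rdma",  # RoCE uses the generic NVMe RDMA transport
    "NVMe/TCP": "nvme_tcp",
}

for label, module in TRANSPORT_MODULES.items():
    # /sys/module only lists modules that are currently loaded (or built in
    # with exposed attributes); "not loaded" may simply mean not yet modprobe'd.
    loaded = Path("/sys/module", module).is_dir()
    print(f"{label:10} ({module}): {'loaded' if loaded else 'not loaded'}")
```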

If NVMe over Fabrics are to achieve their true potential, however, there are three major elements that need to align. First, users will need an NVMe-capable storage network infrastructure in place. Second, all of the major operating system (O/S) vendors will need to provide support for NVMe-oF. Third, customers will need disk array systems that feature native NVMe. Let’s look at each of these in order.

  1. NVMe Storage Network Infrastructure

In addition to Marvell, several leading network and SAN connectivity vendors support one or more varieties of NVMe-oF infrastructure today. This storage network infrastructure (also called the storage fabric) is made up of two main components: the host adapter that provides server connectivity to the storage fabric, and the switch infrastructure that provides all the traffic routing, monitoring and congestion management.

For FC-NVMe, today’s Enhanced 16Gb Fibre Channel (FC) host bus adapters (HBAs) and 32Gb FC HBAs already support FC-NVMe. This includes the Marvell® QLogic® 2690 Series Enhanced 16GFC, 2740 Series 32GFC and 2770 Series Enhanced 32GFC HBAs.

On the Fibre Channel switch side, no significant changes are needed to transition from SCSI-based connectivity to NVMe technology, as the FC switch is agnostic about the payload data. The job of the FC switch is simply to route FC frames from point to point and deliver them in order, with the lowest latency possible. That means any 16GFC or greater FC switch is fully FC-NVMe compatible.

A key decision regarding FC-NVMe infrastructure, however, is whether or not to support both legacy SCSI and next-generation NVMe protocols simultaneously. When customers eventually deploy new NVMe-based storage arrays (and many will over the next three years), they are not going to simply discard their existing SCSI-based systems. In most cases, customers will want individual ports on individual server HBAs that can communicate using both SCSI and NVMe, concurrently. Fortunately, Marvell’s QLogic 16GFC/32GFC portfolio does support concurrent SCSI and NVMe, all with the same firmware and a single driver. This use of a single driver greatly reduces complexity compared to alternative solutions, which typically require two drivers (one for FC running SCSI and another for FC-NVMe).

If we look at Ethernet, which is the other popular transport protocol for storage networks, there is one option for NVMe-oF connectivity today and a second option on the horizon. Currently, customers can already deploy NVMe/RoCE infrastructure to support NVMe connectivity to shared storage. This requires RoCE RDMA-enabled Ethernet adapters in the host and Ethernet switching that is configured to support a lossless Ethernet environment. There are a variety of 10/25/50/100GbE network adapters on the market today that support RoCE RDMA, including the Marvell FastLinQ® 41000 Series and 45000 Series adapters.
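
For a sense of what host-side attachment looks like once a lossless RoCE fabric is in place, here is a hedged sketch wrapping the standard nvme-cli connect command; the target address and subsystem NQN are placeholders, and nvme-cli is assumed to be installed:

```python
#!/usr/bin/env python3
"""Sketch: attach a Linux host to an NVMe/RoCE target with nvme-cli."""
import subprocess

TARGET_IP = "192.168.10.50"                  # placeholder target portal address
TARGET_NQN = "nqn.2020-01.example:subsys1"   # placeholder subsystem NQN

# Load the NVMe RDMA transport, then connect over RoCE. The RoCE-capable
# adapter and lossless Ethernet (DCB/PFC) configuration are assumed to be done.
subprocess.run(["modprobe", "nvme_rdma"], check=True)
subprocess.run([
    "nvme", "connect",
    "--transport=rdma",
    f"--traddr={TARGET_IP}",
    "--trsvcid=4420",          # well-known NVMe-oF port
    f"--nqn={TARGET_NQN}",
], check=True)

# Verify that the new namespaces show up alongside any local NVMe devices.
subprocess.run(["nvme", "list"], check=True)
```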

On the switching side, most 10/25/100GbE switches that have shipped in the past two to three years support data center bridging (DCB) and priority flow control (PFC), and can provide the lossless Ethernet environment needed for a low-latency, high-performance NVMe/RoCE fabric.

While customers may have to reconfigure their networks to enable these features and set up the lossless fabric, these features will likely be supported in any newer Ethernet switch or director. One point of caution: with lossless Ethernet networks, scalability is typically limited to only one or two hops. For high-scalability environments, consider alternative approaches to the NVMe storage fabric.

One such alternative is NVMe/TCP. This is a relatively new protocol (NVM Express Group ratification in late 2018), and as such is not widely available today. However, the advantage of NVMe/TCP is that it runs on today’s TCP stack, leveraging TCP’s congestion control mechanisms. That means there’s no need for a tuned environment (like that required with NVMe/RoCE), and NVMe/TCP can scale right along with your network. Think of NVMe/TCP in the same way as you do iSCSI today. Like iSCSI, NVMe/TCP will provide good performance, work with existing infrastructure, and be highly scalable. For those customers seeking the best mix of performance and ease of implementation, NVMe/TCP will be the best bet.
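
Because NVMe/TCP rides on the ordinary TCP stack, attaching a host looks much like reaching any other IP service: ask the discovery controller what the target advertises, then connect. Here is a sketch using nvme-cli (the address is a placeholder, and the nvme_tcp module is assumed to be available on the host):

```python
#!/usr/bin/env python3
"""Sketch: discover and connect to NVMe/TCP subsystems with nvme-cli."""
import subprocess

DISCOVERY_IP = "192.168.20.80"   # placeholder discovery controller address

subprocess.run(["modprobe", "nvme_tcp"], check=True)

# Ask the discovery controller (port 8009 is the well-known discovery port)
# which subsystems the target is advertising to this host.
subprocess.run([
    "nvme", "discover",
    "--transport=tcp",
    f"--traddr={DISCOVERY_IP}",
    "--trsvcid=8009",
], check=True)

# Connect to every subsystem returned by the discovery controller.
subprocess.run([
    "nvme", "connect-all",
    "--transport=tcp",
    f"--traddr={DISCOVERY_IP}",
    "--trsvcid=8009",
], check=True)
```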

Because there is limited operating system (O/S) support for NVMe/TCP (more on this below), I/O vendors are not currently shipping firmware and drivers that support NVMe/TCP. But a few, like Marvell, have adapters that, from a hardware standpoint, are NVMe/TCP-ready; all that will be required is a firmware update in the future to enable the functionality. Notably, Marvell will support NVMe over TCP with full hardware offload on its FastLinQ adapters in the future. This will enable our NVMe/TCP adapters to deliver high performance and low latency that rivals NVMe/RoCE implementations.

  2. Operating System Support

While it’s great that there is already infrastructure to support NVMe-oF implementations, that’s only the first part of the equation. Next comes O/S support. When it comes to support for NVMe-oF, the major O/S vendors are all in different places; here is where things stood as of August 2020. The major Linux distributions from RHEL and SUSE support both FC-NVMe and NVMe/RoCE and have limited support for NVMe/TCP. VMware, beginning with ESXi 7.0, supports both FC-NVMe and NVMe/RoCE but does not yet support NVMe/TCP. Microsoft Windows Server currently uses the SMB Direct network protocol and offers no support for any NVMe-oF technology today.

With VMware ESXi 7.0, be aware of a couple of caveats: VMware does not currently support FC-NVMe or NVMe/RoCE in vSAN or with vVols implementations. However, support for these configurations, along with support for NVMe/TCP, is expected in future releases.

  3. Storage Array Support

A few storage array vendors have released mid-range and enterprise-class storage arrays that are NVMe-native. NetApp sells arrays that support both NVMe/RoCE and FC-NVMe, and they are available today. Pure Storage offers NVMe arrays that support NVMe/RoCE, with plans to support FC-NVMe and NVMe/TCP in the future. In late 2019, Dell EMC introduced its PowerMax line of flash storage that supports FC-NVMe. This year and next, other storage vendors will be bringing arrays to market that support both NVMe/RoCE and FC-NVMe. We expect storage arrays that support NVMe/TCP to become available in the same time frame.

Future-proof your investments by anticipating NVMe-oF tomorrow

Altogether, we are not too far away from having all the elements in place to make NVMe-oF a reality in the data center. If you expect the servers you are deploying today to operate for the next five years, there is no doubt they will need to connect to NVMe-native storage during that time. So plan ahead.

The key from an I/O and infrastructure perspective is to make sure you are laying the groundwork today to be able to implement NVMe-oF tomorrow. Whether that’s Fibre Channel or Ethernet, customers should be deploying I/O technology that supports NVMe-oF today. Specifically, that means deploying Enhanced 16GFC or 32GFC HBAs and switching infrastructure for Fibre Channel SAN connectivity. This includes the Marvell QLogic 2690, 2740 or 2770 Series Fibre Channel HBAs. For Ethernet, this includes Marvell’s FastLinQ 41000/45000 Series Ethernet adapter technology.

These advances represent a big leap forward and will deliver great benefits to customers. The sooner we build industry consensus around the leading protocols, the faster these benefits can be realized.

For more information on Marvell Fibre Channel and Ethernet technology, go to www.marvell.com. For technology specific to our OEM customer servers and storage, go to www.marvell.com/hpe or www.marvell.com/dell.