Good to see the tech press fact-checking comments from companies the way the political press fact-checks politicians. I'm reading this on a box with RAID1 SSDs. It's had RAID1 SSDs for some years.
Linux has supported RAID on SSDs for years; in fact, it has supported it from the moment you could plug an SSD into a Linux PC.
Linux RAID is different from much of the Windows experience, for a mix of sound technical reasons and historical ones.
Back in the DOS days, CPUs were relatively slow, memory bandwidth and especially I/O bandwidth were limited, and you only had one core.
A range of products grew up in that space that put something like an i960 and several RLL, SCSI, and later IDE controllers on a board along with a bit of RAM, and did RAID and other cool things. In that environment they worked and were way faster than software RAID could have been. Every vendor did it differently, everyone had their own firmware, and naturally enough moving those disks to another controller broke it all.
These boards got rapidly less effective as the PCI bus solved the bandwidth problem and processors got faster and grew MMX and similar instructions. By the time we reached the Pentium II they were basically a joke except for some of the very high-end boards, and even those were dubious compared with just going more SMP. Moore's law won again.
The RAID vendors responded in two ways. Some of them went upmarket to do fancy high-end controllers for big enterprise, but, pursued by faster processors, faster buses and ever more cores, they all merged into one or two companies and mostly went away.
The others started doing software RAID, adding BIOS support to their cards for 'RAID' boot-up and moving their firmware into software drivers you installed on the PC. This was often portrayed as a cheap hack ('fakeraid'), but the benchmarks usually showed otherwise.
In order to keep their revenue stream, even though they were often using the same chips or even boards as their bottom-end, dirt-cheap IDE controllers, they locked the firmware to particular PCI identifiers. In some cases people even used to solder jumper wires onto the IDE cards to use a Windows RAID driver with them rather than buy the expensive card with the jumper set.
Today it's much the same, except that most of the PC vendors bundle their software RAID products as a free value-add, still tied to their own products.
The Linux RAID history is different because, unlike in the Microsoft world, the decision was made to integrate software RAID properly with the OS.
The Linux RAID (md) drivers don't care what you are RAIDing provided it looks like a disk. It may not make sense, but you can RAID floppies, even ramdisks. Actually, RAID floppies were useful in the early days: they were the one cheap hot-pluggable medium everyone had for testing 8)
Because RAID is simply part of the OS core you can build RAID volumes that span two vendors' controllers, or sit on low-end controllers despite the vendors' best efforts to lock you out. It's also bus agnostic, as anyone who has ever rescued RAID volumes pulled from a server and stuffed into a USB caddy will appreciate.
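To put 'looks like a disk' in concrete terms: at this level all md needs is a block device, i.e. something you can open and ask for a size in bytes. A minimal C sketch of that idea, assuming only the standard Linux block ioctls (the device path is just an example; point it at a loop device, a ramdisk, a floppy or a disk in a USB caddy):

```c
/*
 * Illustrative only, not part of md: "looks like a disk" boils down to
 * "is a block device with a size".  md does its own equivalent checks
 * in kernel space.
 */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* BLKGETSIZE64 */

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/loop0";  /* example path */
    int fd = open(dev, O_RDONLY);

    if (fd < 0) {
        perror(dev);
        return 1;
    }

    uint64_t bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &bytes) < 0) {
        perror("BLKGETSIZE64");  /* not a block device, so not RAID material */
        close(fd);
        return 1;
    }

    printf("%s: %llu bytes - looks like a disk\n",
           dev, (unsigned long long)bytes);
    close(fd);
    return 0;
}
```

That is more or less the whole contract, which is why the md code neither knows nor cares what bus or controller a component hangs off.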
When volume management got added to Linux (the dm driver) it was also added in an abstracted way, and it knows how to divide disks up and present slices of them in all sorts of orders. This allowed the Linux RAID drivers to be used to manage the same on-disk formats vendors used in their proprietary Windows products. Some vendors contributed to that support; others got reverse engineered, and it turned out easier than expected because a lot of them seem to use the same layout with just a few numbers changed (maybe they all licensed the same firmware). This is what the dmraid tool does.
Over time the RAID layers in the kernel also grew a wide range of other features, such as being able to use a small fast device (e.g. battery-backed RAM or a fast SSD) to front a slower one, to stack with encryption, and even to emulate failing devices for debugging.
There is a whole load of magic to find volumes, assemble them and also to make installation work. That's a big piece of work the distributions did which, while invisible, shouldn't be forgotten.
Today lots of PC hardware still carries this legacy of strange rival drivers and the fact that Windows didn't integrate RAID support at the time. The standard AHCI controllers can often be given multiple PCI identifiers so that the right 'vendor' driver gets loaded in Windows. This is why there is a big list of identifiers in the AHCI driver beyond the generic class match, and why we keep having to add new ones as we find them. I suspect NVMe will go the same way.
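You can see the raw material for that ID list from userspace without going near the driver: every PCI device exports its class code and vendor/device IDs through sysfs. A rough C sketch (illustrative only, not kernel code) that lists the SATA- and RAID-class controllers in a box:

```c
/*
 * Walk /sys/bus/pci/devices and print any SATA-class (0x0106xx) or
 * RAID-class (0x0104xx) controller with its vendor/device IDs.  These
 * vendor/device pairs are the sort of thing the in-kernel AHCI driver's
 * match table has to enumerate on top of its generic class match.
 */
#include <stdio.h>
#include <dirent.h>

static unsigned long read_hex(const char *path)
{
    unsigned long val = 0;
    FILE *f = fopen(path, "r");

    if (f) {
        if (fscanf(f, "%lx", &val) != 1)
            val = 0;
        fclose(f);
    }
    return val;
}

int main(void)
{
    const char *base = "/sys/bus/pci/devices";
    DIR *dir = opendir(base);
    struct dirent *de;
    char path[512];

    if (!dir) {
        perror(base);
        return 1;
    }

    while ((de = readdir(dir)) != NULL) {
        unsigned long classcode, vendor, device;

        if (de->d_name[0] == '.')
            continue;

        snprintf(path, sizeof(path), "%s/%s/class", base, de->d_name);
        classcode = read_hex(path);
        if ((classcode >> 8) != 0x0106 && (classcode >> 8) != 0x0104)
            continue;       /* neither SATA nor RAID class */

        snprintf(path, sizeof(path), "%s/%s/vendor", base, de->d_name);
        vendor = read_hex(path);
        snprintf(path, sizeof(path), "%s/%s/device", base, de->d_name);
        device = read_hex(path);

        printf("%s  class %06lx  vendor %04lx  device %04lx\n",
               de->d_name, classcode, vendor, device);
    }

    closedir(dir);
    return 0;
}
```

On a board switched into its 'RAID' mode the controller typically reports the RAID class with a vendor-specific device ID rather than the plain AHCI class, which is exactly the case those extra table entries exist to catch.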
Some Windows-oriented vendors don't like the fact that we do things the right way and still push their own stuff:
http://www.gossamer-threads.com/lists/linux/kernel/2352338
but from a technical perspective, and an architectural one, what Linux does (and what in fact most non-PC systems do) is the right thing for the user.
If you write a driver for a disk, or a new kind of transport appears, the chances are you've already got RAID support in Linux, because it's designed right.
If you need to move a disk between machines or controllers it just works, because it's designed right.
I never found out why Microsoft didn't integrate RAID in Windows when it would have made sense to do so. Possibly not wanting to tread on their partners' toes, perhaps worries about antitrust given the period in which this was happening.
+Hans Franke Nowadays you might as well use USB keys. The smaller sizes are pretty much at giveaway prices.
But naturally they use RAID controllers, and nowadays more and more no-RAID ones.
I RAID 1 my SSDs not because I want more performance but because SSDs have a tendency to fail suddenly and silently.
"Lenovo SWRAID driver is a closed source driver at this time, I am afraid it's not possible to submit it to kernel."
Writing this to the kernel ML is either extremely naive or just a joke, wonderful :)
A bunch of real floppies gives you not only a nicer hardware stack and a true feeling of action, but also the chance for a non-tweaked demonstration. Imagine loading a 1 MiB picture from a single floppy in comparison to the same image loaded from, say, 8 or 12 drives. This will be a real lesson without simulating the result. Only a minority can learn the same way from some tables presenting numbers on a screen. With such a real-world example we can get the idea over to a wider audience.
I still have the arrays of small hard drives I lashed together to test md code for Neil Brown. I picked up a bunch of old SCSI drives at a hamfest and hadn't found a use for them--until the md driver came along.
Linux has always been pretty good at supporting two floppy controllers if you had a second one on a multi-function card that could be jumpered to a new address.
But I agree, floppy drives would be way cooler.
But people who say floppies are slow have not met early MFM drives like my 15 MB 5.25" full-height one, which actually was slower than newer floppy drives...
Of course, none of that when using the BIOS-supplied drivers :))
Because, of course, fdformat only works on real floppy drives. You need something that speaks the appropriate mass storage protocol extension to format a disk on a USB floppy drive.
Also, you don't get to do fun things with the low-level format. They support a small set of standard PC formats, and that's your lot: e.g. no Amiga or Acorn formats.
Mine supports, for example, only 1440 kB, 1200 kB, and a bizarre 1232 kB format with 1024-byte sectors.
Hard to imagine why Lenovo did it, and why they don't fix it. Maybe they put something in the image they don't want users removing. Like, say, a second rootkit.
I know this is Linux's Alan Cox, but the post reads like a typical Windows-hating Linux desktopper, and it's quite amusing, even quaint, if you have just a passing familiarity with ZFS internals to contrast.
As far as the layered approach goes, I would also suggest studying FreeBSD's geom, which dips slightly below and above Linux's mdraid (i.e. you still use geom with ZFS and zvols) but IMHO is a bit cleaner, probably because it's a later arrival.