I originally replaced the 8000 series card with the 9000 series card on the advice that the new card was compatible, unaware that the new card's firmware wasn't capable of reading the old array data. After the drives were moved, the new controller showed the drives as "unconverted DCB" (I believe).
It was suggested that I create a new RAID1 array from the drives, which I did. This failed to achieve the desired result: the new array contained the data from the old system, but shifted 1024 sectors into the "disk". I was now unable to boot the system on the new controller because the data was offset, and also unable to go back to the old controller, which would report success when setting up a new RAID1 array but would always fall back to exporting two individual disks after exiting the configuration utility.
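One way the 1024-sector offset could have been confirmed is by looking for the boot sector's 0x55 0xAA signature at the shifted location. This is a hedged sketch on a scratch image standing in for the array; the path and planted signature are illustrative, not the real device:

```shell
# Hypothetical check on a scratch image: if the old boot sector was pushed
# 1024 sectors in, its 0x55 0xAA signature should sit at bytes 510-511 of
# sector 1024.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1025 2>/dev/null
# Plant the signature where an offset MBR would leave it
# (octal 125 252 = hex 55 AA).
printf '\125\252' | dd of="$img" bs=1 seek=$((1024 * 512 + 510)) conv=notrunc 2>/dev/null
sig=$(dd if="$img" bs=512 skip=1024 count=1 2>/dev/null |
      dd bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
echo "$sig"    # 55aa
rm -f "$img"
```

Seeing the signature land exactly one 512 KiB block in is what points at the dd-based fix used below.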
That's where we learned, from a mailing list post found via Google, that we'd need a utility from 3Ware, and would have to contact their support technicians to get it. The requirement is documented in the 9000 series manual, but we hadn't seen it, and the tool isn't available on the vendor's site. It's also unable to fix the drives unless they're connected to a working 8000 series controller; if your controller is dead and you don't have a replacement, you're boned. The tool can't fix drives even when they're attached to an ordinary non-RAID controller.
We worked with 3Ware to get the tool, but they left the office promptly at 4PM without providing us with a solution. Since we don't have the luxury of office hours, I reasoned that there was a slow way to fix the problem myself. I copied the data from the RAID array to an external drive using the following Linux command:
# dd if=/dev/sdb of=/dev/sda bs=$((1024 * 512)) skip=1
"sdb" was the RAID1 array on the 9000 controller, and "sda" was the external drive. It wasn't necessary for the restoration, but it was a helpful validation that the solution that I had in mind would work, and a backup in case of failure. After copying the data, I was able to read the partition table and mount filesystems from "sda".
I felt comfortable with that solution, so I proceeded to move the data blocks on the RAID array:
# dd if=/dev/sdb bs=$((1024 * 512)) skip=1 | dd of=/dev/sdb
After rebooting, the drive appeared to be readable normally.
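The in-place shift can be rehearsed on a scratch image, too. This is a hypothetical rehearsal, not the real device; the one difference is that a regular file needs conv=notrunc on the writing dd, whereas truncation is a no-op on a block device like /dev/sdb, which is why the original command could omit it:

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=$((1024 * 512)) count=1 2>/dev/null
printf 'PAYLOAD' >> "$img"
# Read from 512 KiB onward and write back starting at offset 0. The writer
# can only consume bytes the reader has already read, so its write position
# always trails the read position by 512 KiB and never clobbers unread data.
dd if="$img" bs=$((1024 * 512)) skip=1 2>/dev/null | dd of="$img" conv=notrunc 2>/dev/null
shifted=$(dd if="$img" bs=7 count=1 2>/dev/null)
echo "$shifted"    # PAYLOAD
rm -f "$img"
```

The same lag between reader and writer is what made the pipeline safe to run against the live array.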
I'm not sure where the 9000 controller puts its metadata, or how much data it stores. My guess is that it claimed a small block at the end of the disk, destroying whatever was there when the drives were connected to the 8000 series controller. In our case the end of the drive held a swap partition, so we probably weren't hurt by the conversion. I don't fully trust the process in general, but the risk seems low and everything appears to be working properly.
This mess is exactly why I avoid hardware RAID when I can.