The FTL is more than just a LUT or a map, but as I wrote before, the job of the controller on an SSD (including the FTL) is the same as on other storage media controllers: turn media-specific things into the blocks that block storage expects. The OS uses a block-centric protocol to read and write, and the media controller translates blocks into whatever is actually happening on-disk (be it spinning rust, NAND chips with one or more levels per cell, or something else like tape or optical media). On-disk there are no blocks; that's just a logical concept. For NAND flash cells you'd use an FTL, while on NOR flash that same FTL wouldn't work, due to the very different nature of NOR flash.
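To make the translation job concrete, here's a minimal Python sketch, assuming a toy (die, block, page) geometry; all names and values are made up, and a real FTL is also juggling wear leveling, garbage collection, bad-block remapping, and ECC on top of this:

```python
# Hypothetical raw NAND: (die, block, page) -> page payload.
nand = {(0, 17, 3): b"A" * 4096, (1, 4, 62): b"B" * 4096}

# The FTL's lookup table: logical block address -> physical page location.
ftl_lut = {0: (0, 17, 3), 1: (1, 4, 62)}

def read_logical_block(lba):
    """The OS asks for 'block lba'; the data lives wherever the LUT says."""
    return nand[ftl_lut[lba]]

print(read_logical_block(0)[:4])  # b'AAAA' -- the OS never sees (die, block, page)
```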
The FTL can be thought of as a dynamic LUT that changes based on inputs (many of which are unknown to consumers); this is a useful way to understand what the FTL is doing. It's also how the FTL is treated during raw NAND data recovery after a controller failure: the FTL is "frozen" into a static LUT, and the contents of the raw NAND are reconstructed based on the LUT's final state.
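A toy illustration of both halves of that, with hypothetical page locations: writes repoint the LUT (the dynamic part, driven by policy the host can't see), and recovery replays the frozen LUT against a raw dump:

```python
# Hypothetical state at the moment of controller failure.
nand = {(0, 17, 3): b"old", (0, 17, 4): b"new"}
ftl_lut = {0: (0, 17, 4)}   # LBA 0 was rewritten; the old page is now stale
free_pages = [(0, 17, 5)]

def write_logical_block(lba, data):
    """NAND pages can't be overwritten in place: write fresh, repoint the LUT."""
    loc = free_pages.pop(0)   # placement policy is the controller's secret
    nand[loc] = data
    ftl_lut[lba] = loc        # old page becomes stale, to be GC'd later

# Recovery: the LUT stops changing ("frozen"), and the logical image is
# rebuilt from the raw dump using that final state.
def reconstruct(frozen_lut, nand_dump):
    return [nand_dump[loc] for _, loc in sorted(frozen_lut.items())]

print(reconstruct(ftl_lut, nand))  # [b'new'] -- the stale b'old' page is ignored
```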
The distinction I'm making between the FTL and other storage media's controllers is that the others are more transparent and less "active" (the exception being SEDs, self-encrypting drives). The FTL is truly a black box, while the controllers for other storage media are mostly transparent, or at least more so.
The same would apply to a CHS map or an SA+Block Map on a magnetic disk. But none of this matters to the OS: it just wants blocks, and all the parts of the system (IDE, SATA, SAS, and NVMe at the interface, the drivers, the filesystems) assume that the drive and the OS agree on the state of the blocks. As soon as a controller starts playing man-in-the-middle, it needs to emulate those characteristics perfectly (which only really works with a 1-to-1 mapping), or you're no longer providing a reliable path between the blocks as the OS sees them and the blocks as the disk sees them.
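For contrast, here's the classic CHS-to-LBA arithmetic, using the old 255-head/63-sector ATA convention purely as an example. Because it's a pure 1-to-1 function it round-trips exactly, which is precisely the property a MITM'ing controller gives up:

```python
HPC, SPT = 255, 63  # heads per cylinder, sectors per track (old ATA convention)

def chs_to_lba(c, h, s):
    # Sectors are 1-indexed, hence the (s - 1).
    return (c * HPC + h) * SPT + (s - 1)

def lba_to_chs(lba):
    c, rem = divmod(lba, HPC * SPT)
    h, s = divmod(rem, SPT)
    return c, h, s + 1

assert lba_to_chs(chs_to_lba(16, 2, 5)) == (16, 2, 5)  # round-trips exactly
```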
Just as CHS maps don't belong in kernel space, neither does responsibility for the integrity of the blocks' contents.
We should have the hardware in place, and organized in such a way, that it reports truthful data to the OS. If we can't trust block data coming in, then why are we trusting the data in main memory? Why trust the CPU cache?
This is my opinion, obviously. Some people will choose to use ZFS, but it goes against the system design principles I favor for most storage needs I encounter. I subscribe to keeping systems and code as modular as possible, in small auditable chunks; I don't like systems that span many domains and present a large surface for bugs. Just as we reduce our security attack surface, we should strive for system architectures that reduce bug surface.
Regarding controllers MITM'ing data the OS is trying to access: both ZFS and hardware RAID controllers are technically MITM'ing the data presented to applications, but the number of lines of code a hardware RAID controller runs to translate data for the application is significantly smaller than the number ZFS runs to do the same.
In the context of bit rot: if you have to cross your fingers and hope your RAID controller will deal with it, you're already in trouble. That's why it is useful for the OS, and for future software improvements, to be able to interact with and interrogate the drives directly. The same applies to security (looking at you, OPAL) and performance (fake sector sizes, anyone?).
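As one concrete example of interrogating a drive directly: on Linux, the block layer's sysfs queue attributes expose logical vs. physical sector size, which is enough to spot a 512e drive presenting 512-byte sectors on 4K media. A small sketch, assuming a Linux box; the device name is just an example:

```python
from pathlib import Path

def sector_sizes(dev="sda"):
    # Real sysfs attributes on Linux: logical vs. physical block size.
    q = Path(f"/sys/block/{dev}/queue")
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    return logical, physical

logical, physical = sector_sizes("sda")
if logical != physical:
    print(f"{logical}-byte sectors are emulated on {physical}-byte media")
```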
Hardware RAID controllers perform scrubs to detect and fix bit rot, just as ZFS has to for a given volume. If you have to cross your fingers for the hardware RAID scrub to succeed, you're going to need to sacrifice several goats for the ZFS scrub to succeed, because ZFS's data is fragmented across the surface of the HDD while the hardware RAID's isn't.
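Mechanically, a scrub is conceptually just this: a toy single-stripe, RAID-5-style parity check in Python. Real scrubs walk every stripe of the array, and note that a bare parity mismatch only tells you something rotted, not necessarily which member (ZFS does the equivalent with per-block checksums instead of parity alone):

```python
from functools import reduce

def xor(blocks):
    """XOR equal-length byte blocks together (RAID-5-style parity)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def scrub_stripe(data_blocks, parity_block):
    expected = xor(data_blocks)
    if expected != parity_block:
        return expected   # mismatch detected: recomputed parity to write back
    return None           # stripe is clean

stripe = [b"\x01\x02", b"\x10\x20"]
assert scrub_stripe(stripe, xor(stripe)) is None          # clean stripe
assert scrub_stripe(stripe, b"\x00\x00") == b"\x11\x22"   # rot detected
```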