@tubs-ffm
The consensus seems to be growing on splitting the rooms into two (or more) subnets. TCP/IP is a very survivable network protocol *if* you follow its rules. I think you know this and have ruled out the split into subnets, so I can only surmise that you have some use case(s) requiring all primary-use network devices to be in VLAN 1 (broadcast and/or multicast traffic seems to me the leading contender, but it doesn't really matter and you don't have to share that information).
Your configuration conundrum has been an interesting problem to think about.
I *believe* a solution may be constructed. It's not elegant; honestly, it's a bit of a hack and will require a higher level of effort to implement than just some network configuration (and that LoE may simply NOT BE WORTH IT), but how much effort is too much is for you to decide.
Before going into that I do want to speak about the elephant in the room: DMZ hosts. From my perspective I have grave concerns about placing "DMZ" hosts deep in a network. Yes, inbound traffic from the Internet has to pass through your perimeter FW to ultimately reach your DMZ hosts, so you have a point of control there. Philosophically, though, you want to protect yourself internally from DMZ hosts too, in the (unlikely) event they are compromised. Typically DMZ hosts are closely network-coupled with the FW infrastructure. In your case they are not, and indeed a design requirement is for internal hosts in the right room to have 10GbE access to the DMZ hosts.

My thought here is to front-end your DMZ subnet with a virtual opnsense FW guest running on Hyper-V. Your NAT will take place at your perimeter FW; the virtual FW will not use NAT and will simply route your DMZ subnet with policies/ACLs applied. A point of note: if you have been planning to use (or ARE using) NAT reflection, you won't be able to do that (again, your requirement for direct 10GbE access). IMO, creating and troubleshooting ACLs/FW policies in opnsense is far and away easier than implementing ACLs directly on a switch (though doing both is good practice), and the FW provides additional features that may be desirable: DPI, log collection, and reporting. This design change also creates a very closely coupled DMZ FW and DMZ host infrastructure. There are configuration considerations and tasks within Hyper-V's virtual networking that will be required.
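Those Hyper-V virtual-networking tasks amount to giving the opnsense guest two legs: one on a transit VLAN toward the L3 switch, one on the DMZ VLAN shared with the DMZ guests. A rough PowerShell sketch on the Hyper-V host (the VM name, switch name, and VLAN IDs here are all placeholders, not your actual values):

```
# Hypothetical names/IDs for illustration only
# Transit leg toward the right-room L3 switch
Add-VMNetworkAdapter -VMName "opnsense-dmz" -Name "transit" -SwitchName "ExternalSwitch"
Set-VMNetworkAdapterVlan -VMName "opnsense-dmz" -VMNetworkAdapterName "transit" -Access -VlanId 999

# DMZ leg shared with the DMZ guests
Add-VMNetworkAdapter -VMName "opnsense-dmz" -Name "dmz" -SwitchName "ExternalSwitch"
Set-VMNetworkAdapterVlan -VMName "opnsense-dmz" -VMNetworkAdapterName "dmz" -Access -VlanId 50
```

The DMZ guests themselves then get their adapters tagged onto the DMZ VLAN the same way, so all DMZ traffic in and out of Hyper-V has to cross the opnsense guest.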
Okay, back to the task at hand:
Warning: this solution relies on asymmetric routing. Honestly, you really should not do this, and services/applications may not behave rationally. It also assumes static routing is in use.
Configure routing on the L3 switch in the right room.
Create and configure a DMZ VLAN interface on the L3 switch in the right room, *or*, if you are going to run an opnsense DMZ instance, a transit VLAN between the L3 switch in the right room *and* the opnsense DMZ instance on Hyper-V. If creating a transit VLAN, you will then need a route on the L3 switch for the DMZ subnet via opnsense's transit-VLAN interface.
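As a rough sketch of the transit-VLAN variant (IOS-style syntax; the VLAN IDs and subnets are made up for illustration, substitute your own addressing):

```
! Transit VLAN between the right-room L3 switch and the opnsense DMZ guest
vlan 999
 name TRANSIT-DMZ
!
interface Vlan999
 ip address 10.0.99.1 255.255.255.252
!
! DMZ subnet reachable via opnsense's transit-VLAN leg (10.0.99.2)
ip route 10.0.50.0 255.255.255.0 10.0.99.2
```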
Remove the DMZ interface from your perimeter FW. Create a route on the perimeter FW for the DMZ subnet via the L3 switch. Check your NAT rules and make sure they are configured correctly for this change.
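On the perimeter FW that boils down to something like this (same hypothetical addressing as above; in opnsense this lives in the gateways/static-routes configuration rather than a config file):

```
# Gateway: the right-room L3 switch, reachable on the VLAN 1 side
#   e.g. 192.168.1.2
# Static route: 10.0.50.0/24 -> 192.168.1.2
# NAT: confirm outbound NAT still covers 10.0.50.0/24 now that the DMZ
#      is a routed subnet rather than a directly attached interface,
#      and rework any port forwards that referenced the old DMZ interface
```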
Non-DMZ hosts in the right room that need 10GbE DMZ connectivity will have to be configured with the L3 switch as their default gateway. Internet-originating or return traffic to these hosts will obviously bypass this path (there's that asymmetric business). Implement this either via DHCP (a reservation is required if a scope for VLAN 1 exists) or via static configuration on the VLAN 1 hosts requiring 10GbE access.
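For a statically configured host, that's just (hypothetical addressing: VLAN 1 is 192.168.1.0/24, the perimeter FW is .1, and the L3 switch's VLAN 1 interface is .2):

```
# Linux example; a Windows host would set the same gateway in the
# adapter's IPv4 settings, or get it from a DHCP reservation
ip addr add 192.168.1.50/24 dev eth0
ip route add default via 192.168.1.2   # L3 switch, NOT the perimeter FW (.1)
```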
DMZ hosts will need to be configured appropriately: if you choose to front-end the DMZ with opnsense, then that is their default gateway; if not, it's the DMZ VLAN interface on the L3 switch.
All that said, if you don't have a functional requirement (broadcast/multicast or something else) for most of the hosts in both rooms to be in the same subnet, then it would be so much more elegant to simply have a "trusted access" subnet in each room with L3 configured on both your switches, use a transit VLAN from your perimeter firewall to the switch in the left room, and front-end your DMZ hosts with opnsense in the right room. You can still span your IoT VLANs between the two switches *and* span your guest VLAN from the perimeter FW across both switches...