Wazzz up guys!!
So I have a Linux server, an OPEN Drive to be exact, and an iMac Pro with a 40GigE-to-Thunderbolt 3 adapter.
The Linux server is connected via a 4x 40GigE LAG (LACP) to an Arista switch, and the iMac is connected to the switch via a single 40GigE link.
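For reference, the server-side LAG could be set up with iproute2 along these lines. This is only a sketch: the interface names (`ens1f0`..`ens1f3`) and the IP address are placeholders, and the hash policy is an assumption. One thing worth knowing about LACP is that a single TCP flow still hashes onto one member link, so the bond raises aggregate throughput across clients rather than single-stream speed.

```shell
# Sketch: 802.3ad (LACP) bond of four 40GigE ports on the Linux server.
# Interface names and addressing are placeholders for this setup.
modprobe bonding
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4 lacp_rate fast
for nic in ens1f0 ens1f1 ens1f2 ens1f3; do
    ip link set "$nic" down            # a NIC must be down before enslaving
    ip link set "$nic" master bond0
done
ip link set bond0 up
ip addr add 10.0.0.10/24 dev bond0     # placeholder address
cat /proc/net/bonding/bond0            # verify LACP negotiated with the Arista
```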
So I'm probably going to turn on jumbo frames for this setup to get maximum performance. The reason we need so much bandwidth is that they want to edit two streams of 5K RAW video per system.
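If jumbo frames go in, the MTU has to match end to end (server, the Arista ports, and the Mac's adapter), or you get silent fragmentation or black-holed packets. A quick way to set and verify it, with interface names assumed:

```shell
# Set MTU 9000 on the server side (bond0 is assumed to be the LAG interface).
ip link set bond0 mtu 9000

# On the iMac the adapter name will differ (en7 here is a guess):
#   sudo ifconfig en7 mtu 9000

# Verify 9000-byte frames pass unfragmented. 8972 = 9000 - 20 (IP) - 8 (ICMP).
ping -M do -s 8972 10.0.0.20     # Linux syntax; macOS equivalent: ping -D -s 8972
```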
I don't know what flow control settings to use; I'll ask the vendor, OpenDrive, and see what they recommend. But a lot of these smaller server companies don't really know their own network tuning that well.
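On the Linux side, Ethernet pause-frame flow control is inspected and toggled with `ethtool`; whether it should be on depends on the NIC, the switch, and the traffic pattern, so treat this as a starting point to benchmark rather than a recommendation:

```shell
# Show the current rx/tx pause-frame state for one of the bonded NICs
# (ens1f0 is a placeholder; repeat for each member interface).
ethtool -a ens1f0

# Enable flow control in both directions -- the Arista port must agree,
# or pause negotiation simply won't take effect.
ethtool -A ens1f0 rx on tx on
```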
Then of course they'll use either NFS or SMB to let the iMac Pros connect to the storage, over that single 40GigE link. I don't know which protocol to use. I think the storage vendor may default to NFS, but I want a backup plan in case this setup doesn't provide enough bandwidth.
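If the vendor does default to NFS, the macOS side would be mounted along these lines. The export path, server IP, and the rsize/wsize values are assumptions to tune against real Resolve playback, not known-good numbers for the OpenDrive:

```shell
# Hedged macOS NFS mount for the edit volume; all names are placeholders.
sudo mkdir -p /Volumes/media
sudo mount -t nfs -o vers=3,tcp,rsize=65536,wsize=65536,locallocks,rdirplus \
    10.0.0.10:/export/media /Volumes/media

# Confirm which options the mount actually negotiated.
nfsstat -m
```

The `locallocks` option keeps byte-range locking on the client, which some editing apps prefer over NFS server-side locks; that too is worth confirming with the vendor.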
At my old job everything was Windows. With SMB 3.0 Multichannel I could have more than one network connection to the switch, and Windows would utilize the extra link as long as it was in the same subnet. So for a data copy between two Windows machines, say the Windows server had 40GigE to the switch and the client machine had 2x 1GigE to the same switch, each interface with its own IP address in the same subnet, SMB would spread the transfer across both links, and with an Explorer copy I could get 2 Gb/s of throughput.
The editing system we're using is going to be Blackmagic DaVinci Resolve, and I don't know if macOS can spread a transfer out the same way. It's my understanding that the feature I was using is SMB Multichannel; SMB Direct (the RDMA variant) really is a Windows thing.
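One quick check from the Mac side: once a share is mounted, macOS can report the negotiated SMB attributes per share, which shows what the client and server actually agreed on (exactly which attributes appear varies by macOS version):

```shell
# List negotiated attributes (SMB dialect, signing, etc.) for mounted shares.
smbutil statshares -a
```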
So I'm thinking: since we're using a Linux server and an iMac client over 40GigE links, which protocol would be best? The OpenDrive seems to be a really fast NAS, and it speaks both NFS and SMB. You guys have any ideas? Also, any experience with jumbo frames and flow control?