Unless you are trying to experiment with various switches or techniques, I am a fan of keeping it simple: the minimum number of switches needed to do the job, and as few heavily-configured places as possible. The core pieces of my home/lab network are stable so I can keep everything else up and running 24x7, and I do networking experiments with additional equipment attached to the network as needed. If I am playing with OpenStack, for example, I'll set up dedicated switches and servers for that, then connect the test network into my main network with just a port or two between them to bring public IPs, storage, and the required internal networks into the test environment.
In particular, I don't break things out into VLANs unless needed. I will break logical things out into different CIDR blocks freely, and then, if needed, assign blocks to different VLANs based on security/filtering needs. If blocks are at the same site and there aren't any differences in filtering or access rules, the blocks end up on the same VLAN. I try to plan out my blocks from the various private address spaces so that they don't overlap, which lets me run static or dynamic VPN connections between sites without overlapping IP addresses causing headaches. For things that are site-specific, I reuse the same set of blocks again and again, so when I see 172.31.250.x I know it is site-specific and I need to be at that site to even see the network, whereas 192.168.75.0/24 is stuff in Dallas, 192.168.77.0/24 is stuff in Chicago, etc. I also avoid 192.168.(0,1,2,255).0/24 and 10.0.(0,1).0/24 since they are frequently used by other networks or by cheap home routers for NAT.
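To sanity-check that kind of addressing plan before bringing up VPNs between sites, a quick script can confirm that no two blocks overlap. This is just an illustrative sketch in bash (the helper names are my own, not from any particular tool):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Print "overlap" if two CIDR blocks share any addresses, else "ok".
# Two ranges overlap iff each one starts before the other ends.
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local mask1 mask2 start1 end1 start2 end2
  mask1=$(( (0xFFFFFFFF << (32 - len1)) & 0xFFFFFFFF ))
  mask2=$(( (0xFFFFFFFF << (32 - len2)) & 0xFFFFFFFF ))
  start1=$(( $(ip_to_int "$net1") & mask1 ))
  end1=$(( start1 | (mask1 ^ 0xFFFFFFFF) ))
  start2=$(( $(ip_to_int "$net2") & mask2 ))
  end2=$(( start2 | (mask2 ^ 0xFFFFFFFF) ))
  if [ "$start1" -le "$end2" ] && [ "$start2" -le "$end1" ]; then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap 192.168.75.0/24 192.168.77.0/24   # Dallas vs Chicago: ok
cidr_overlap 10.0.0.0/8 10.0.1.0/24            # overlap -- why I avoid 10.0.(0,1).0/24
```

Run it against every pair of blocks in the plan once, and you never have to think about it again when adding a new tunnel.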
Some examples of VLANs I have broken out right now on the office / home networks:
- Public internet blocks (one VLAN per upstream connection, or MPLS tags broken out so each is on its own VLAN tag).
- VoIP / phone system / PoE ports
- Surveillance/camera systems
- Untrusted Guest networks (Wireless & Wired)
- Untrusted IoT things like "smart" thermostats and other cruft (they can access the internet only; the inside network can NAT into them).
- Isolated home network (ex: home can get to office but not office to home)
- VLAN(s) that can be accessed through the VPN between sites = production stuff at each site
- Multicast video VLANs
- Private VLANs per client as-needed
- iSCSI / SAN traffic
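On a Linux host, a breakout like the list above can ride a single trunk port as 802.1Q sub-interfaces, so the cabling never changes when roles do. A minimal sketch, assuming an uplink named eth0 and made-up VLAN IDs:

```shell
# Hypothetical VLAN IDs and interface name (eth0), for illustration only.
# Trunk all tags down one cable, then break out a sub-interface per role.
ip link add link eth0 name eth0.20 type vlan id 20   # VoIP / phones / PoE
ip link add link eth0 name eth0.30 type vlan id 30   # surveillance cameras
ip link add link eth0 name eth0.40 type vlan id 40   # untrusted guest
ip link add link eth0 name eth0.50 type vlan id 50   # untrusted IoT
ip link set dev eth0.20 up
ip link set dev eth0.30 up
ip link set dev eth0.40 up
ip link set dev eth0.50 up
```

The switch port eth0 plugs into just needs those tags trunked; standing up another network later is one more `ip link add`, not another cable run.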
With the exception of the multicast video VLANs, where I need to watch multicast table sizes (I'm talking 500-2k groups, not your typical setup), and the SAN traffic, where I want my highest-speed ports on the fastest switches, I'll connect every port on a server to a matching-speed port on a switch in that rack (or bench) and just assign the VLANs later. Trying to run dedicated DMZ switches and management switches is a pain to cable and maintain over time if you are doing top-of-rack switches. It is much easier to segregate by VLAN and make sure your ToR/edge switches have enough bandwidth to your core or to the other ToR/edge switches. As server roles change over time, just change the VLANs exposed to the hardware and no cable changes are needed.
From the diagrams you posted, it still looks like you only have 2 servers with 10G ports. I would connect all those ports directly to the LB6M, and then connect the LB4s or EX3300s back to the LB6M with all the 10G ports needed, so any port in your network could talk to any other at native speed. Hang your firewall and routing off of some of the 1G ports since you only have a few hundred megabits of upstream bandwidth. You could end up needing only one LB6M and one or two of the 1G switches, and save the rest for other experiments down the line.
That gives you lots of options - if you want to spin up a VM and try a new pfSense version or VPN package, just add that tag to a port on your VM box and expose it as a NIC to the VM. Make the output of that test box a different tag that you can connect other VMs to, or expose it on a 1G port and connect a test laptop for experiments. Lots of options open up when you can make changes in software and don't need to re-cable your environment just to experiment.
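As one concrete (hypothetical) way to do that on a Linux/KVM box: a VLAN-aware bridge lets you hand a single tag to a test VM's NIC in software. The interface names and VLAN ID below are made up for illustration:

```shell
# Assumptions: physical uplink eth0, a VM tap interface named tap-pfsense,
# and VLAN 200 reserved for the experiment -- all illustrative.
ip link add name br0 type bridge vlan_filtering 1
ip link set dev br0 up
ip link set dev eth0 master br0
bridge vlan add dev eth0 vid 200                        # VLAN 200 trunked in from the switch
ip link set dev tap-pfsense master br0
bridge vlan add dev tap-pfsense vid 200 pvid untagged   # the VM sees plain untagged traffic
```

Moving the experiment to a different network later is one `bridge vlan` change, with the cabling untouched.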