Someone stronger in networking than I can interject and not hurt my feelings here.
But my understanding is that LAG/LACP just creates a round-robin-ish style of connections, hashed on IP, MAC, port, etc. At no point will two devices talking to each other use more than one link. So if you have a file server talking to a backup server over 4 links, only 1 will be used. But if you have 20 clients connecting, you'll likely get ~5 per link, providing up to 4 Gbit in aggregate.
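A toy sketch of how that hash-based link selection works (the function name, CRC32 as the hash, and all the addresses are made up for illustration; real switches use vendor-specific hash policies):

```python
# Illustrative model of a hash-based LAG policy (e.g. layer2+3):
# each flow's tuple is hashed to pick ONE physical link, so a single
# pair of hosts never spreads across links, but many clients do.
import zlib

NUM_LINKS = 4

def pick_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str) -> int:
    """Hash the flow tuple; the same pair always lands on the same link."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return zlib.crc32(key) % NUM_LINKS

# One file server talking to one backup server: always the same link.
a = pick_link("aa:aa", "bb:bb", "10.0.0.10", "10.0.0.20")
b = pick_link("aa:aa", "bb:bb", "10.0.0.10", "10.0.0.20")
assert a == b  # a single flow never uses more than one link

# Twenty different clients: their flows spread out over the 4 links.
links_used = {
    pick_link(f"c{i:02x}", "bb:bb", f"10.0.0.{100 + i}", "10.0.0.20")
    for i in range(20)
}
print(f"links in use by 20 clients: {sorted(links_used)}")
```

That's the whole "round robin ish" effect: per-flow, not per-packet, so aggregate bandwidth scales with client count, never for one conversation.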
NFS doesn't do any sort of MPIO or round robin. The best you could do is have two IPs on both the host and the NAS and mount two datastores, one on each IP (subnet). But you'd only ever really get 2 Gbit total when both links are maxed out, and neither datastore would ever see > 1 Gbit.
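Roughly what that two-subnet workaround looks like on an ESXi host (addresses, export paths, and datastore names below are made up):

```shell
# Two NAS IPs on two different subnets; mount one datastore against
# each, so traffic for ds1 and ds2 can ride different physical links.
esxcli storage nfs add -H 10.0.1.50 -s /export/ds1 -v ds1
esxcli storage nfs add -H 10.0.2.50 -s /export/ds2 -v ds2
```

Each datastore is still pinned to a single 1 Gbit path; you only win if you spread VMs across both datastores.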
The question you really should be asking is whether you actually care about throughput and why you're fighting to get it. You likely care more about IOPS and latency, which don't need LAG/LACP.
VMware uses 4 KB blocks, so over a single 1 GbE link you can in theory pull something like 30,000 IOPS.
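The back-of-the-envelope math behind that number (ignoring protocol overhead, so real-world figures will be lower):

```python
# 1 Gbit/s link ferrying 4 KB blocks, best case, zero overhead.
link_bits_per_sec = 1_000_000_000   # 1 Gbit/s
block_bytes = 4 * 1024              # 4 KB block size

bytes_per_sec = link_bits_per_sec / 8        # 125 MB/s of line rate
iops = bytes_per_sec / block_bytes           # ops that fit per second
print(f"theoretical max: {iops:,.0f} IOPS")  # ~30,500
```

Most workloads will bottleneck on disk latency long before they saturate that, which is why the link count rarely matters here.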
More to the point - that's just what the VM and host can get from the storage. If there's a file server on it, is it also going to have 4 links out to the clients? Otherwise the only benefit is internal disk-to-disk transfer, at best.
Anyone got any corrections? I'm sick, there's a lot of DayQuil going on here, and I'm on a little screen.