Trying to find somebody who actually understands offloading in a virtual environment. My company's default policy is that all offloading must be disabled on both physical and virtual servers. I understand disabling some settings so you don't have to worry about whether the environment is set up properly to handle them, and to reduce the chance of network burst issues, and I understand not wanting to overload the NIC processor on a physical server.

My concern is with virtual environments: when a virtual server tries to offload, is it offloading to the physical NIC on the host, or to a virtual NIC where the processing would be picked up by the host processor? If it's picked up by the host processor, I don't see a downside. If all your host's processors are already allocated, it's no different from the virtual server doing the work itself; and if they aren't all allocated, then it really is offloading the processing and freeing up that virtual server a bit.

I haven't researched all of the offloading options, only a handful, so if I have any of that wrong I apologize, and feel free to better educate me.
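For context, this is roughly how I've been looking at the settings on our Linux guests (a sketch assuming `ethtool` is available and the interface is named `eth0`; adjust the interface name and the offload list for your environment):

```shell
# Show which offload features are currently enabled on the interface
ethtool -k eth0

# Disable the segmentation/receive offloads our policy mandates turning off
# (tso = TCP segmentation offload, gso = generic segmentation offload,
#  gro = generic receive offload)
ethtool -K eth0 tso off gso off gro off

# Verify the change took effect
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-(segmentation|receive)-offload'
```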
Thanks,
Bruce