You can in fact bond for 2 Gbps across two different switches, in two completely different ways.
One way involves the use of Cisco stacking switches, which let you run 802.3ad across two independent 'stacked' switches. You can also use the external PSU to provide redundant power to each switch (giving each switch redundant PSUs, and making the switches themselves redundant to each other).
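For the server side of that setup, a minimal sketch of an 802.3ad (LACP) bond on Linux with iproute2 might look like this. Interface names (eth0/eth1), the address, and the assumption that each NIC is cabled to a different member of the switch stack are all hypothetical:

```shell
# Load the bonding driver and create an LACP (802.3ad) bond
modprobe bonding
ip link add bond0 type bond mode 802.3ad lacp_rate fast miimon 100

# Enslave both NICs (they must be down before joining the bond)
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring the bond up and address it
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0

# Verify the aggregator negotiated with the switch stack
cat /proc/net/bonding/bond0
```

The switch side needs a matching port-channel spanning both stack members; without the stack presenting itself as one logical switch, LACP across two separate chassis won't form.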
The second involves the use of the Linux bonding driver in balance-rr mode. This has a slight bug with the bridge driver in that it sometimes won't forward ARP packets, but if you're just using the box as a web head or whatever, you don't really care about those.
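A balance-rr bond is set up the same way, just with a different mode; again, interface names are placeholders. Unlike 802.3ad, balance-rr stripes individual packets round-robin across the slaves, so a single TCP stream can exceed one link's bandwidth (at the cost of possible out-of-order delivery):

```shell
# Round-robin bond: stripes packets across both NICs,
# no LACP negotiation required on the switches
modprobe bonding
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
```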
The 'big boys' do use iBGP etc. internally, but that's for a different reason: at large scale you can't buy a switch with a large enough MAC table (they run out of CAM), so you put routers at the top of each rack and interlink those. You can still connect your routers to redundant switches easily enough with VLANs and such (think router-on-a-stick).
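The router-on-a-stick idea can be sketched on a Linux box too: one trunk port carrying multiple tagged VLANs, routed between them. The VLAN IDs, subnets, and eth0 here are illustrative, not from the original:

```shell
# One physical trunk (eth0) carrying tagged VLANs 10 and 20
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20

# Give the router an address in each VLAN
ip addr add 10.0.10.1/24 dev eth0.10
ip addr add 10.0.20.1/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up

# Enable routing between the VLANs
sysctl -w net.ipv4.ip_forward=1
```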
Yes, I was thinking exactly about stacking two independent switches (I've done it with Cisco 3750s, but you can do it with other brands too).
The only problem is that with this kind of stack you're now dealing with one "logical" system, so if the firmware is buggy or someone issues the wrong command, you have a single point of failure (though that can also happen when an HA system fails on its own or because of operator error).