What happens if a switch consistently loses too many packets?

mohamed nur

I'm just wondering what happens if a switch loses too many packets. I'm asking because I have a switch that is constantly overloaded. This is caused by people uploading large files over the network. I have a backup switch in case this one fails.


What happens to the switch: nothing. It just drops the packets. What can happen to the connections is another matter. Some TCP connections may use [Nagle's algorithm](http://en.wikipedia.org/wiki/Nagle%27s_algorithm). You may also have a switch smart enough to try sending a source quench message (basically telling the sender to slow down or pause briefly). Hennes 8 years ago 0

1 answer to the question

flungo

The short answer is, "not a lot". In terms of what actually happens on the switch, the packets are lost - and that is it. The switch doesn't care other than logging the fact they were lost.

The sections below explain exactly what causes packet loss and how the network as a whole is designed to deal with it.

Again, in short: responding to packet loss is done by the clients, not the switch. It is their responsibility, and the only influence the switch has is through the configuration of its queues and through features such as QoS, which merely prioritise traffic (they don't prevent packet loss).

What is happening on the switch?

Switches use queues to provide a small amount of buffering between the packets coming in and going out. They are somewhat like a cache and share many of its characteristics. Typically, you want a packet to come into the switch and go straight out. This is the fastest path, but it is not always possible.

If you have multiple clients converging on a link that lacks the bandwidth, or the path to the destination runs through a lower-bandwidth link, packets cannot be sent out as fast as they come into the device.

The additional packets pile up in the switch's queues, in the hope that this is just a burst of traffic and that the flow will ease off before the buffer fills completely, letting the switch 'catch up' with itself.

If the buffer does become full, any further packets that arrive have nowhere to be stored, and this is where packets are dropped: they are simply discarded until there is enough space on the queue again.
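As a rough mental model (not a description of any real switch's firmware), tail drop on a fixed-size FIFO queue can be sketched like this; the capacity and packet objects are invented for illustration:

```python
from collections import deque

class TailDropQueue:
    """Toy model of a switch egress queue: FIFO with tail drop."""

    def __init__(self, capacity=64):
        self.capacity = capacity      # how many packets the buffer can hold
        self.queue = deque()
        self.dropped = 0              # the only thing the switch "does" about loss: count it

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1         # buffer full: the packet is silently discarded
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Called whenever the outgoing port is free to transmit another packet.
        return self.queue.popleft() if self.queue else None
```

Note that nothing in this model notifies the sender; noticing the gap is entirely the endpoints' job, as described below.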

Example

A real-world example of this would be a file server in an office. Say you had 48x 100Mbps connections and 2x 1Gbps connections (where the 48 are for clients and the two gigabit links are bonded to the server). If no one else is trying to communicate with the server, a client can happily use its full 100Mbps connection (minus overheads, of course). But as soon as more than 20 clients want to access the file server, that is 2Gbps already, and the 21st person won't be able to get their full rate.

Packets start getting added to the queue on a first-come, first-served basis and are released according to whatever prioritisation is configured on the switch; typically First In, First Out (FIFO) when nothing like QoS is set up. When that buffer becomes full (too many packets), further packets are simply dropped, and it's down to the clients to notice and do something about it.

The typical outcome is that each client ends up using, on average, roughly available_bandwidth / (number_of_clients * bandwidth_per_client) of its own link, so the rate each client gets is proportional to its connection speed to the switch. This isn't so much a deterministic design decision as a result of the probability of the next packet coming from any one of the ports as soon as space on the queue becomes available.
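Plugging the office scenario above into that formula, as a back-of-the-envelope check (ignoring protocol overheads and assuming contention is spread evenly):

```python
# Rough numbers for the file-server example above (assumed values, not measurements).
server_bandwidth_mbps = 2 * 1000      # 2x 1Gbps bonded links to the server
client_link_mbps = 100                # each client's port speed

for clients in (20, 21, 40):
    # Fraction of its own 100Mbps link each client can use on average.
    share = min(1.0, server_bandwidth_mbps / (clients * client_link_mbps))
    print(f"{clients} clients: ~{share * client_link_mbps:.0f} Mbps each ({share:.0%} of their link)")

# 20 clients: ~100 Mbps each (100% of their link)
# 21 clients: ~95 Mbps each (95% of their link)
# 40 clients: ~50 Mbps each (50% of their link)
```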

What happens to the network traffic?

The software/hardware doing the communicating should identify that packets are being dropped and apply rate limiting to reduce the amount of data it tries to send. That said, of course, if the data is UDP (typical for real-time applications) there aren't any ACK messages, so the sender won't know to do any rate limiting.

Rate limiting is generally how all traffic determines the fastest rate it can send at. For example, in a simple network you might have a 1Gbps connection to a router but only a 20Mbps upload speed; when you try to send more than that, your packets are dropped, the PC doesn't get an ACK, and rate limiting is imposed that matches the maximum speed achievable across the entire path (internet included) to the destination.

In TCP, this rate limiting is imposed by the protocol itself: its congestion-control mechanisms retransmit lost segments and slow the sender down, with Nagle's algorithm additionally coalescing small writes. Over UDP, the outcome would probably be lower quality (for things such as a video stream) or outright failure, and the feedback that not enough data is arriving has to come explicitly from the destination rather than from a missing acknowledgement.
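A very rough sketch of that loss-driven back-off idea (halve the rate when packets go missing, creep back up when everything gets through). This shows only the principle, loosely AIMD-shaped, not real TCP internals, and the capacities and increments are invented for the example:

```python
def simulate_sender(link_capacity=50, rounds=8):
    """Toy back-off loop: react to lost (unACKed) packets by slowing down."""
    rate = 100                                # packets per round the sender tries to push
    for r in range(1, rounds + 1):
        sent = rate
        delivered = min(sent, link_capacity)  # anything over capacity gets tail-dropped
        lost = sent - delivered
        if lost:                              # missing ACKs: multiplicative decrease
            rate = max(1, rate // 2)
        else:                                 # everything ACKed: additive increase
            rate += 5
        print(f"round {r}: sent {sent}, lost {lost}, next rate {rate}")

simulate_sender()
```

A UDP sender, by contrast, has no ACKs to react to and would simply keep pushing 100 packets per round, losing the excess every time.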

This doesn't really answer the question of "what happens if a switch loses too many packets"; consider explaining what actually happens. I'm not sure how rate limiting is relevant to what happens when a piece of network hardware drops too many packets. Ramhound 8 years ago 0
@Ramhound, when the hardware drops packets, they are just dropped. Nothing special happens to them, they are simply discarded entirely: the client and the transport protocol have to work out that the packet never arrived and therefore needs to be resent. Does what I've added help? I'll try to go through it and maybe add a diagram if that makes it easier to visualise? flungo 8 years ago 0
