 - RU.UNIX.BSD ------------------------------------------------------------------
 From : Alexander Kolesnikoff                2:5020/400     18 Nov 2006  18:42:27
 To : Valentin Nechayev
 Subject : Re: MSI
 -------------------------------------------------------------------------------- 
 
 Valentin Nechayev <netch@segfault.kiev.ua> wrote:
 
 > 
 >>>> Slawa Olhovchenkov wrote: 
 > 
 > EG>> What is PCI Message Signalled Interrupts, support for which has
 > EG>> appeared in CURRENT for gigabit cards?
 > 
 > SO> I'd guess it's something smarter than busmastering. It offloads the
 > SO> CPU and improves efficiency.
 > 
 > You're guessing wrong. :) Judging by the description, it is a way of
 > delivering an interrupt request not via the standard routing (where
 > there are, for example, 4 lines per PCI slot), but as a message written
 > into the APIC's inbound memory, which then triggers an interrupt on the
 > processor by means of the APIC itself.
 > 
 > The technology is, of course, as twisted as almost everything from
 > Intel, but it can yield an optimization, and in particular it removes
 > the problem of shared interrupts. And if the links don't lie, a few
 > extra bytes of message data can be passed along with the request, so
 > that nothing extra has to be poked at all :)
 
   But several advantages should encourage use of MSI:
 
     * Avoid EOI write: One indirect function call is needed to generate an
       End Of Interrupt (EOI) write to the IO SAPIC because PCI uses level
       triggered interrupts. This cost is hidden from device driver writers
       but is required to indicate when the OS thinks a level triggered
       interrupt has been serviced. If the IRQ line is still asserted when
       the EOI write reaches the IO SAPIC, another interrupt transaction will
       be generated. Though MMIO writes are posted, the IO bus bandwidth and
       some number of CPU cycles are consumed.
 
     * Exclusive Vector: The device driver can avoid an indirect function
       call by avoiding both a shared PCI IRQ line and a shared CPU vector. IO
       SAPIC implementations to date typically only have 7 IRQ lines - enough
       for several single function PCI devices. Several Multi-function PCI
       devices (eg 4-port 100BT) will result in shared IRQ lines. Shared CPU
       vector should only occur in very large systems under rare
       circumstances.
 
     * DMA ordering: Normally, the IRQ line bypasses the DMA data path. Thus
       race conditions exist where a DMA might not reach the cache coherency
       "domain" before the IRQ is delivered and acted upon. For PCs and the
       like this typically isn't a problem since the IO paths are short.
       Similarly, the HP ZX1 chip set places an IO SAPIC on each PCI Host bus
       adapter. This results in the interrupt transaction getting delivered
       after any previous DMA from the PCI bus.
 
       However, the IA64 architecture allows the IO SAPIC to be placed
       anywhere in the system topology. For larger systems, this can be a
       problem. When the interrupt is a transaction on the bus, PCI
       ordering rules prevent the MSI from bypassing any inbound DMA
       transaction. Thus, when the interrupt finally reaches the CPU, one
       can be certain all DMA has reached the cache coherency "domain"
       (eg memory) as well and is not stuck in any coalescing buffers
       between the IO device and the destination memory. Thus one doesn't
       need any additional magic to guarantee the in-flight DMA is
       coherent with CPU caches.
 
     * Target multiple CPUs: This is a wish-list item. Given the right
       services, a
       smart device can target transaction completions at different CPUs by
       generating interrupt transactions for specific Local SAPICs. One goal
       might be to service the interrupt on the same CPU that initiated the
       transaction. Tradeoffs between driver D-cache footprint and interrupt
       latency would help determine applications for this. High performance
       Clustering folks were looking at this but I've not heard of any
       prototype efforts.
 
   So what exactly is so twisted here, and what Intel perversions are we
 talking about?
 
  Alexander
 --- ifmail v.2.15dev5.3
  * Origin: UKU (2:5020/400)
 
 
