An Ethernet-Accessible Control Infrastructure for Rapid FPGA Development
Andrew Heckerling, Thomas Anderson, Huy Nguyen, Greg Price, Sara Siegal, John Thomas
High Performance Embedded Computing Workshop, 24 September 2008
This work is sponsored by the Department of the Air Force, under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Air Force.
MIT Lincoln Laboratory HPEC2008-1 AJH-HTN 09/24/08
Outline
• Introduction and Motivation
• Container Infrastructure
  – Concept
  – Implementation
• Example Application
• Summary
Rapid Advanced Processor In Development (RAPID)
[Figure: RAPID tiles and IP library — control, I/O, capture, and known-good signal-processing designs composed onto a custom processor board (VME/VPX, MicroTCA form factors) or COTS boards; each FPGA tile shows registers, ports, function core, memory interface, and control bus inside the Container Infrastructure]
Main features of RAPID:
• Composable processor board
  – Custom processor composed from tiles extracted from known-good boards
  – Form factor highly flexible
  – Tiles accompanied by verified firmware/software for the host computer interface
• Co-design of boards and IPs
  – Use portable FPGA Container Infrastructure to develop functional IPs
  – Container has on-chip control infrastructure, off-chip memory access, and host computer interface
  – Surrogate board can be used while target board(s) are being designed (custom) or purchased (COTS)
Motivation
Goal: Airborne Radar System Demo — Quick Development: 7–12 Months
[Figure: receiver array (4 channels, 20 MHz BW) feeding a ROSA II front-end, the RAPID signal processor, and a back-end system processor computer]
Cut system development time in half
Outline
• Introduction and Motivation
• Container Infrastructure
  – Concept
  – Implementation
• Example Application
• Summary
FPGA Processing Application Control
[Figure: FPGAs process streaming data to and from an ADC, a DAC, other FPGAs, and data processors; a control processor connects to each FPGA]
• FPGA processes high-speed streaming data from various sources
• Control processor initializes, monitors, and configures the FPGA

Container Infrastructure
[Figure: Controller Core connected to a status register, functional blocks 1–3, and two off-chip memories]
• Control processor gains visibility into the FPGA via the Controller Core
• Controller Core provides monitoring and control of memories and functional blocks
  – Set parameters, load memory, monitor status
FPGA Container Infrastructure
• FPGA Function Core development can be accelerated with infrastructure provided by the Container: host computer interface, on-chip registers, ports, and memory interface
• Real-time application or debug utility can access any address (registers, ports, and memories) on the FPGA
• Message formatting and data transfer operations are supported through a Remote Direct Memory Access (RDMA) library
[Figure: computer running a real-time application and a debug utility over the C++ interface and RDMA library, connected via GigE to the FPGA board; the Container wraps the Function Core with registers, ports, a RAM interface, the controller, and the bus]
Outline
• Introduction and Motivation
• Container Infrastructure
  – Concept
  – Implementation
• Example Application
• Summary
Motivation for Memory-Mapped Control
[Figure: a general processor connected over an interconnect (e.g., processor bus, PCI, PCI Express, SRIO) to graphics devices, Ethernet devices, and FPGAs via address and data lines]
• Memory-mapped control means
  – Device control via simple reads/writes to specific addresses
  – Processor and interconnect are not specific to any device
  – With proper software, the processor can control any device
• Container Infrastructure extends this concept to FPGA control
Interconnect
• Interconnect choices
  – Ethernet, Serial RapidIO, PCI Express, etc.
• Platform-specific considerations
  – MicroTCA has a Gigabit Ethernet channel to all payload slots, separate from the higher-speed data channels
[Figure: MicroTCA chassis holding FPGA boards; a hub routes "fat pipe" data channels among the boards and Gigabit Ethernet to the control processor]
• Advantages of using Gigabit Ethernet
  – Ubiquitous
  – Wide industry support
  – Easy to program
Memory-Mapped Control Protocol
• Stateless "request/response" protocol
• Reads and writes an address range (for accessing memory) or a constant address (for accessing FIFO ports)
• Presently implemented on top of UDP and Ethernet

Message format (byte offsets):
  0   magic, version
  4   node number
  8   command
  12  address
  16  length
  20  flags
  24  message tracking id
  28  data (optional)

Commands:
  READ   Request read data
  WRITE  Request write data
  DATA   Response to READ
  ACK    Response to WRITE
  NACK   Response to READ/WRITE (command failed)
Memory-Mapped Control on FPGA
Example FPGA address space (read and written by the control processor):
  0x0     Off-chip SDRAM
  0x1000  …
  0x2000  Mode
  0x2004  Status
  0x2008  Temperature
• Each device or core has an address within the FPGA
• Control processor refers to these addresses when reading from or writing to the FPGA
Real-Time Application Example
• Real-time application uses simple C++ methods to communicate with the FPGA
• C++ interface portable to other interconnects (SRIO, PCIe)

// Create an FPGA access object
FpgaUdpReadWrite fpga("fpga-network-address", FPGA_UDP_PORT);

// Send input data from myBuffer to the FPGA
fpga.write(FPGA_INPUT_DATA_ADDR, INPUT_DATA_LENGTH, myBuffer);

// Read back the output data
fpga.read(FPGA_OUTPUT_DATA_ADDR, OUTPUT_DATA_LENGTH, myBuffer);

[Figure: the Container system diagram from the FPGA Container Infrastructure slide, with the real-time application path highlighted]
Command-Line Example
• Command-line and scripting interface provides debug access to the FPGA container
• Function core can be tested before the final software is written

# Send input data to the FPGA
w 192.168.0.2 1001 0x0 sample_input_data.bin

# One-second delay (in ms)
P 1000

# Read back the output data
r 192.168.0.2 1234 0x10000000 0x8000 result_data.dat

[Figure: the Container system diagram from the FPGA Container Infrastructure slide, with the debug utility path highlighted]
Integrated Container System
[Block diagram: Ethernet PHY and Ethernet MAC feed a UDP protocol engine and a control message encoder/decoder; the decoded message (command, address, data) drives a Streaming DMA Controller acting as the WISHBONE master; the WISHBONE interconnect routes to the WISHBONE slaves — a register file, a port array (ports 0 through 2^n−1), and a WISHBONE/memory bridge — alongside control peripherals (mode, status, "sticky" status), the memory controller, and the processing application; Lincoln Laboratory IP blocks are marked]
Message Decoding
[Figure: GigE PHY (on-chip or off-chip) → Xilinx Embedded TEMAC → UDP protocol engine → control message encoder/decoder → decoded message (command, address, data)]
• Inside the FPGA, the control message is decoded into a memory-mapped read or write command
• Can mix and match components to implement different protocols
WISHBONE Bus Interface
[Figure: decoded message (command, address, data) drives the Streaming DMA Controller as WISHBONE master; the WISHBONE interconnect routes to the WISHBONE slaves]
• Streaming DMA Controller (SDMAC) handles read/write commands by generating WISHBONE bus cycles
• WISHBONE Interconnect routes transactions to destinations based on the memory map
• Transaction block sizes range from one word (four bytes) to 8 kB