Agenda
9:00 a.m.
Opening Remarks
9:01 a.m.
Keynote:
Preserving Privacy through Processing Encrypted Data
[talk]
Abstract:
Secure Function Evaluation (SFE) allows an interested party to evaluate a function over private data without learning anything about the inputs other than the outcome of the computation. This offers a strong privacy guarantee: SFE enables, e.g., a medical researcher, a statistician, or a data analyst to conduct a study over private, sensitive data without jeopardizing the privacy of the study's participants (patients, online users, etc.). Nevertheless, applying SFE to "big data" poses several challenges, most significantly the excessive processing time it imposes on applications.
In this talk, I describe Garbled Circuits (GCs), a technique for implementing SFE that can be applied to any problem that can be described as a Boolean circuit. GCs are a particularly good fit for FPGA acceleration because GC implementations map naturally onto FPGA circuits. Since our goal is to use GC for extremely large problems, including machine learning algorithms, we propose running GCs on clusters of FPGA-equipped machines in the datacenter to accelerate the processing. I will present our progress and the challenges of this approach.
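To make the Boolean-circuit view of GC concrete, here is a minimal, illustrative Python sketch (not taken from the talk; all helper names are hypothetical) of garbling a single AND gate: the garbler picks two random labels per wire, encrypts each output label under the corresponding pair of input labels, and the evaluator, holding exactly one label per input wire, can open exactly one row of the table and learns only the output label. Real GC systems add optimizations such as point-and-permute and free-XOR, and a full circuit chains many such gates.

```python
# Toy garbled AND gate: illustration only, not a production SFE implementation.
import hashlib
import secrets

LABEL_BYTES = 16

def new_wire():
    """Two random labels per wire, one for bit 0 and one for bit 1."""
    return {0: secrets.token_bytes(LABEL_BYTES), 1: secrets.token_bytes(LABEL_BYTES)}

def encrypt(key_a, key_b, label):
    """One-time pad derived from the two input labels; an all-zero tag lets
    the evaluator recognise the single row it is able to decrypt."""
    pad = hashlib.sha256(key_a + key_b).digest()          # 32 bytes
    data = label + b"\x00" * LABEL_BYTES                  # label || tag
    return bytes(x ^ y for x, y in zip(data, pad))

def decrypt(key_a, key_b, row):
    pad = hashlib.sha256(key_a + key_b).digest()
    data = bytes(x ^ y for x, y in zip(row, pad))
    label, tag = data[:LABEL_BYTES], data[LABEL_BYTES:]
    return label if tag == b"\x00" * LABEL_BYTES else None

# Garbler: encrypt the four rows of the AND truth table and shuffle them.
wire_a, wire_b, wire_out = new_wire(), new_wire(), new_wire()
table = [encrypt(wire_a[a], wire_b[b], wire_out[a & b])
         for a in (0, 1) for b in (0, 1)]
secrets.SystemRandom().shuffle(table)

# Evaluator: holds one label per input (here the labels for a=1, b=1) and
# recovers only the output label, learning nothing else about the inputs.
label_a, label_b = wire_a[1], wire_b[1]
out_label = next(lab for lab in (decrypt(label_a, label_b, row) for row in table)
                 if lab is not None)
print(out_label == wire_out[1])   # True: AND(1, 1) = 1
```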
Miriam Leeser, Northeastern University
Bio:
Miriam Leeser is Professor of Electrical and Computer Engineering at Northeastern University, currently on sabbatical at Maynooth University in Ireland. She received the prestigious Fulbright Scholar Award in 2018 to support this sabbatical. She has been doing research in FPGAs for decades and has done groundbreaking work in floating-point implementations, unsupervised learning, and, most recently, privacy-preserving data processing. She has been a faculty member at Northeastern since 1996, where she heads the Reconfigurable Computing Laboratory and is a member of the Computer Engineering group. She is a senior member of ACM, IEEE, and SWE.
9:40 a.m.
Invited Talk:
Bringing FPGAs to HPC production systems and codes
[talk]
Abstract:
FPGA architectures and development tools have made great strides towards a platform for high-performance and energy-efficient computing, competing head to head with other processor and accelerator technologies. While we have seen the first large-scale deployments of FPGAs in public and private clouds, FPGAs have yet to make inroads into general-purpose HPC systems. At the Paderborn Center for Parallel Computing, we are at the forefront of this development and have recently put "Noctua", our first HPC cluster with FPGAs, into production.
In this talk, I will share some of the experience we gained on our journey from planning to procurement to installation of the Noctua cluster, highlight aspects that are critical for FPGAs, and explain how we addressed them. Further, I will present results from ongoing work to port libraries and MPI-parallel HPC codes to the 32 Intel Stratix 10 FPGA boards in our cluster.
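As a hedged illustration of the porting pattern mentioned above (one MPI rank driving one FPGA board), the following Python sketch uses mpi4py and pyopencl to bind each rank to one OpenCL accelerator device on its node. It assumes both packages and an OpenCL-visible FPGA runtime are installed; kernel compilation and launch are omitted, since Intel FPGA OpenCL kernels are compiled offline into board images rather than built from source at run time. This is a generic sketch, not the Noctua project's actual code.

```python
# Sketch: bind each MPI rank to one OpenCL accelerator (FPGA) device.
from mpi4py import MPI
import pyopencl as cl

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Collect all accelerator devices visible on this node, across OpenCL platforms.
fpgas = []
for platform in cl.get_platforms():
    try:
        fpgas.extend(platform.get_devices(device_type=cl.device_type.ACCELERATOR))
    except cl.Error:
        pass  # this platform exposes no accelerator devices

if not fpgas:
    raise RuntimeError(f"rank {rank}: no OpenCL accelerator devices found")

# Simple round-robin binding of this rank to one of the locally visible boards.
device = fpgas[rank % len(fpgas)]
ctx = cl.Context(devices=[device])
queue = cl.CommandQueue(ctx)
print(f"rank {rank} bound to {device.name.strip()}")
```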
Christian Plessl, Paderborn University, Germany
Bio:
Christian Plessl is professor for High-Performance IT Systems at Paderborn University, Germany. He has been involved in numerous research projects studying reconfigurable architectures, design flows, runtime systems and the application of FPGAs in HPC. His research has been recognized with several awards, e.g., the ReConFig Best Paper Awards in 2014 and 2012 and the FPL significant paper award in 2015. He is also the director of the Paderborn Center for Parallel Computing (PC²), which is Paderborn University's HPC center providing computing resources for computational sciences at Paderborn University and Germany-wide. Leveraging the longstanding expertise in FPGA acceleration in Paderborn, PC² has recently deployed its first production HPC cluster with FPGAs.
10:10 a.m.
Coffee break
10:30 a.m.
"SimBSP: Enabling RTL Simulation for Intel FPGA OpenCL Kernels"
[talk]
[abstract]
Ahmed Sanaullah and Martin C. Herbordt, Boston University
10:45 a.m.
Invited Talk:
Scalable FPGA Deployments for HPC and DC applications
Abstract:
FPGAs have recently found their way and niche in large-scale data-center (DC) applications, e.g., for endpoint encryption/compression, video transcoding, and genomics applications. We present two research projects that address two remaining roadblocks on the way to scalable performance and energy-efficiency gains: the cloudFPGA project proposes a disaggregated FPGA architecture for scale-out applications, and the near-memory computing project uses OpenCAPI-attached FPGAs to tear down the "memory wall" for HPC and DC applications.
Christoph Hagleitner, IBM Research Zurich
Bio:
Christoph Hagleitner leads the "Heterogeneous Cognitive Computing Systems" group at the IBM Research - Zurich Lab (ZRL) in Ruschlikon, Switzerland. The group focuses on heterogeneous computing systems for cloud datacenters and HPC. Applications include big-data analytics and cognitive computing. He obtained a diploma degree in Electrical Engineering from ETH Zurich, Switzerland, in 1997 and a Ph.D. degree for a thesis on CMOS-integrated microsensors from ETH Zurich in 2002. In 2003 he joined IBM Research to work on the system architecture of a novel probe-storage device (the "millipede" project). In 2008, he started to build up a new research group in the area of accelerator technologies. The team initially focused on on-chip accelerator cores and gradually expanded its research to heterogeneous systems and their applications.
11:10 a.m.
"First steps in porting the LFRic Weather and Climate model to the FPGAs of the EuroExa architecture"
[talk]
[abstract]
Mike Ashworth, Graham Riley, Andrew Attwood, and John Maher, University of Manchester
11:25 a.m.
"Integrating network-attached FPGAs into the cloud using partial reconfiguration"
[talk]
[abstract]
Burkhard Ringlein, Francois Abel, Alexander Ditter, Christoph Hagleitner, and Dietmar Fey, Friedrich-Alexander-University and IBM Research
11:40 a.m.
Invited Talk:
Accelerating Intelligence
[talk]
Abstract: Massive amounts of data are being consumed and processed to drive business. The exponential increase in data has not been matched by the computational power of processors, which has led to the rise of accelerators. However, big data algorithms for ETL, ML, AI, and DL are evolving rapidly and/or have significant diversity. These moving targets are poor candidates for ASICs, but match the capabilities and flexibility of FPGAs. Furthermore, FPGAs provide a platform to move computation to the data, away from the CPU, by providing computation at line rate at the network and/or storage. Bigstream is bridging the gap between high-level big data frameworks and accelerators using our Hyper-acceleration Layer built on top of SDAccel. In this talk, we will describe the Bigstream Hyper-acceleration Layer, which automatically provides computational acceleration at the CPU, storage, or network for big data platforms with zero code change.
John Davis, Bigstream Networks
12:05 p.m.
"The MANGO Process for Designing and Programming Multi-Accelerator Multi-FPGA Systems"
[talk]
[abstract]
Rafael Tornero, José Flich, José María Martínez, Tomás Picornell, Universitat Politècnica de València, and Vincenzo Scotti, Technical University of Valencia and University of Naples Federico II
12:20 p.m.
"Stream Computing of Lattice-Boltzmann Method on Intel Programmable Accelerator Card"
[talk]
[abstract]
Takaaki Miyajima, Tomohiro Ueno, and Kentaro Sano, RIKEN Center for Computational Science
12:35 p.m.
Adjourn