Group testing was originally designed for the Selective Service to test inductees for syphilis by pooling blood samples. It appears in many forms, including coin-weighing problems, experimental designs, and public health screening. The goal of the group testing design problem is to design a maximally efficient set of tests on items so that the test results contain enough information to identify a small subset of items of interest. With the emergence of new computational applications that monitor large volumes of streaming data or that acquire a reduced number of measurements of large data sets, both the design problem and its associated algorithmic problem are crucial: for efficiently extracting a small amount of useful information from a voluminous data set, for designing efficient high-throughput biological screens, and for reducing the number of experiments necessary to identify items of biological interest.
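As a minimal illustration of the flavor of such designs (the function and variable names here are hypothetical, and the single-defective setting is a deliberate simplification), the following sketch shows a classic nonadaptive pooling scheme: pool t contains every item whose index has bit t set, so one defective among n items can be located with only about log2(n) pooled tests rather than n individual ones.

```python
import math

def design_pools(n):
    """Nonadaptive pooling design for ONE defective among n items:
    pool b contains every item whose index has bit b set.
    Needs only ceil(log2 n) tests instead of n individual tests."""
    t = max(1, math.ceil(math.log2(n)))
    return [[i for i in range(n) if (i >> b) & 1] for b in range(t)]

def run_tests(pools, defective):
    """Each pooled test is positive iff it contains the defective item."""
    return [defective in pool for pool in pools]

def decode(results):
    """The vector of outcomes spells out the defective index in binary."""
    return sum(1 << b for b, positive in enumerate(results) if positive)

n, defective = 100, 42
pools = design_pools(n)
recovered = decode(run_tests(pools, defective))
assert recovered == defective
print(len(pools), "pooled tests identify item", recovered, "out of", n)
```

Real designs in the workshop's scope handle multiple defectives and noisy outcomes, but the same principle applies: the test matrix is chosen so that the outcome vector uniquely determines the small subset of positives.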
The aim of this workshop is to bring together researchers from a diverse mixture of areas: theoretical computer science (including streaming and sublinear algorithms, property testing, lower bounds, and space complexity), bioinformatics and the analysis of large genetic data sets, information and coding theory, and high-throughput biological screening.