DISTINGUISHED LECTURE SERIES

Meet world-renowned researchers at lectures hosted by the School of Computing Science. These are open to students, researchers and those working in industry and education to share the latest leading-edge research. Admission is free of charge.

SPEAKERS

Lin Zhong

Date: Wednesday, October 16, 2024

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204, Burnaby campus

Talk Title: How Classical Computer Scientists Can Help Advance Quantum Computing

View Seminar Recording

Abstract: Quantum computing promises unprecedented computational power, yet in practice, quantum hardware remains highly error-prone due to the challenge of isolating quantum states from environmental interference. A primary method for building fault-tolerant quantum systems is through Quantum Error Correction (QEC) codes, which utilize many physical qubits to construct a more reliable logical qubit. During operation, certain qubits are measured, and these measurements are analyzed to determine the errors that have occurred—a classical process known as QEC decoding.

In this talk, we present our results in developing scalable QEC decoders for surface codes, a key class of QEC codes. We demonstrate that crucial QEC decoding algorithms can be parallelized to achieve decoding speeds faster than measurement at unprecedented scales—d=51 for Union Find decoders and d=33 for MWPM decoders. Our experience indicates that classical computer scientists can contribute to advancing quantum computing, even without a deep understanding of quantum mechanics, especially when help from quantum experts is available.
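As a toy illustration of the classical decoding step described in the abstract (this is a drastically simplified stand-in, not the speaker's surface-code decoders): a distance-d repetition code protects one logical bit by replication, and decoding reduces to a purely classical majority vote. All names here are illustrative.

```python
def encode(bit, d):
    """Protect one logical bit by replicating it across d physical bits."""
    return [bit] * d

def decode(physical_bits):
    """Classical decoding step: recover the logical bit by majority vote."""
    return 1 if sum(physical_bits) > len(physical_bits) / 2 else 0

# A distance-5 repetition code corrects up to 2 bit-flip errors.
codeword = encode(1, 5)
codeword[0] ^= 1  # flip bit 0
codeword[3] ^= 1  # flip bit 3
print(decode(codeword))  # → 1 (the logical bit survives two errors)
```

Surface-code decoders such as Union Find and MWPM solve a far harder version of this problem from indirect syndrome measurements, but the point stands: the decoding itself is classical computation.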

Biography: Lin Zhong is the Joseph C. Tsai Professor of Computer Science at Yale University. He received his B.S. and M.S. from Tsinghua University and Ph.D. from Princeton University. From 2005 to 2019, he was with Rice University. At Yale, he leads the Efficient Computing Lab to make computing, communication, and interfacing more efficient and effective. He and his students received best paper awards from ACM MobileHCI, IEEE PerCom, ACM MobiSys (3), ACM ASPLOS and IEEE QCE. He is a recipient of the NSF CAREER Award, the Duncan Award from Rice University, and the RockStar Award (2014) and Test of Time Award (2022) from ACM SIGMOBILE. He is a Fellow of IEEE and ACM. More information about his research can be found at http://www.yecl.org.

Maneesh Agrawala

Date: Thursday, September 5, 2024

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204, Burnaby campus

Talk Title: Unpredictable Black Boxes are Terrible Interfaces

View Seminar Recording

Abstract: Modern generative AI models are capable of producing surprisingly high-quality text, images, video and even program code. Yet, the models are black boxes, making it impossible for users to build a mental/conceptual model for how the AI works. Users have no way to predict how the black box transmutes input controls (e.g., natural language prompts) into the output text, images, video or code. Instead, users have to repeatedly create a prompt, apply the model to produce a result and then adjust the prompt and try again, until a suitable result is achieved. In this talk I’ll assert that such unpredictable black boxes are terrible interfaces and that they always will be until we can identify ways to explain how they work. I’ll also argue that the ambiguity of natural language and a lack of shared semantics between AI models and human users are partly to blame. Finally I’ll suggest some approaches for improving the interfaces to the AI models.

Biography: Maneesh Agrawala is the Forest Baskett Professor of Computer Science and Director of the Brown Institute for Media Innovation at Stanford University. He works on computer graphics, human computer interaction and visualization. His focus is on investigating how cognitive design principles can be used to improve the effectiveness of audio/visual media. The goals of this work are to discover the design principles and then instantiate them in both interactive and automated design tools. Honors include an Okawa Foundation Research Grant (2006), an Alfred P. Sloan Foundation Fellowship (2007), an NSF CAREER Award (2007), a SIGGRAPH Significant New Researcher Award (2008), a MacArthur Foundation Fellowship (2009), an Allen Distinguished Investigator Award (2014) and induction into the SIGCHI Academy (2021). He was named an ACM Fellow in 2022. 

Scott Hudson

Date: Wednesday, April 10, 2024

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204, Burnaby campus

Talk Title: The Future is Not What it Used to Be: Some Thoughts on Why the Fun Stuff in Technical Human-Computer Interaction is All Ahead of Us

Abstract:  In this talk I will consider how the future of technical Human-Computer Interaction is different than it used to be - what has changed, what has stayed the same, and mostly what should we do about it. Although it seems mundane, when we consider change in any sort of computing technology, we must consider "the elephant in the room" of Moore's law. I will present two quick thought experiments in this talk to try to convince you that you really don't understand the implications of Moore's law, that this really does matter, and that you should perhaps be thinking a little differently about your work as a result. (Spoiler alert: you are dramatically underestimating how much change in computing power is ahead of you, and probably under-utilizing it's potential for HCI advances.) Based on this, the core of my talk will consider what we might be missing in terms of how we go about our work, and talk about several exemplars of where a different view of a "new future" might lead in terms of specific research directions. With these exemplars as motivation, I will consider some more general thoughts about the methodologies we use in our work, and suggest a few ways we might consider thinking differently about how we go about our work.

Biography: Scott Hudson is a Professor of Human-Computer Interaction at Carnegie Mellon and previously held positions at the University of Arizona and Georgia Tech. He has published extensively in technical HCI. He recently received the ACM SIGCHI Lifetime Research Award. Previously he received the ACM SIGCHI Lifetime Service Award, was elected to the CHI Academy, and received the Allen Newell Award for Research Excellence at CMU. His research interests within HCI are wide ranging, but tend to focus on technical aspects of HCI. Much of his recent work has been considering advanced fabrication technologies such as new machines, processes, and materials for 3D printing, as well as computational knitting and weaving, and applications of mechanical meta-materials.

Elisa Bertino

Date: Thursday, March 14, 2024

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204, Burnaby campus

Talk Title: Security of Cellular Networks

Abstract: As the world moves to 5G cellular networks and next-generation networks are being envisioned, security is of paramount importance and new tools are needed to ensure it. In the talk, after discussing motivating trends in wireless communications, we present LTEInspector, a model-based testing approach for cellular network protocols. LTEInspector combines a symbolic model checker and a cryptographic protocol verifier in the symbolic attacker model. Using it, we uncovered 10 new attacks along with 9 prior attacks, categorized into three abstract classes (i.e., security, user privacy, and disruption of service), in three procedures of 4G LTE. To ensure that the exposed attacks pose real threats and are indeed realizable in practice, 8 of the 10 new attacks have been validated and their accompanying adversarial assumptions have been tested in a real testbed. We then present results obtained by 5GReasoner, which extends the LTEInspector approach to 5G protocols. Finally, we overview on-going research projects.

Biography: Elisa Bertino is Samuel Conte Professor of Computer Science at Purdue University. She serves as Director of the Purdue Cyberspace Security Lab (Cyber2Slab). Prior to joining Purdue, she was a professor and department head at the Department of Computer Science and Communication of the University of Milan. She has been a visiting researcher at the IBM Research Laboratory in San Jose (now Almaden), at Rutgers University, and at Telcordia Technologies. She has also held visiting professor positions at the National University of Singapore and the Singapore Management University. Her recent research focuses on security and privacy of cellular networks and IoT systems, and on edge analytics for cybersecurity. Elisa Bertino is a Fellow of IEEE, ACM, and AAAS. She received the 2002 IEEE Computer Society Technical Achievement Award "For outstanding contributions to database systems and database security and advanced data management systems", the 2005 IEEE Computer Society Tsutomu Kanai Award for "Pioneering and innovative research contributions to secure distributed systems", the 2019-2020 ACM Athena Lecturer Award, and the 2021 IEEE Innovation in Societal Infrastructure Award. She is currently serving as ACM Vice-President.

Martin Grohe

Date: Thursday, February 1, 2024

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204, Burnaby campus

Talk Title: The Logic of Graph Neural Networks

View Seminar Recording

Abstract: Graph neural networks (GNNs) are deep learning models for graph data that play a key role in machine learning on graphs. A GNN describes a distributed algorithm carrying out local computations at the vertices of the input graph. Typically, the parameters governing this algorithm are acquired through data-driven learning processes. After introducing the basic model, in this talk I will focus on the expressiveness of GNNs: which functions on graphs or their vertices can be computed by GNNs? Understanding the expressiveness will help us understand the suitability of GNNs for various application tasks and guide our search for possible extensions. Surprisingly, the expressiveness of GNNs has a clean and precise characterisation in terms of logic and Boolean circuits, that is, computation models of classical (descriptive) complexity theory.
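The "local computations at the vertices" in the abstract can be sketched in a few lines. The following toy message-passing layer (illustrative only; real GNNs use learned nonlinear updates) also hints at the expressiveness question: vertices with identical local views cannot be distinguished.

```python
def gnn_layer(adj, features, w_self=1.0, w_neigh=0.5):
    """One message-passing round: each vertex combines its own feature
    with the sum of its neighbours' features (a purely local computation)."""
    new_features = {}
    for v, neighbours in adj.items():
        neigh_sum = sum(features[u] for u in neighbours)
        # A trained GNN would apply a learned nonlinear update here;
        # a fixed linear combination keeps the sketch minimal.
        new_features[v] = w_self * features[v] + w_neigh * neigh_sum
    return new_features

# Triangle graph: every vertex has the same local view, so vertices that
# start with equal features stay equal -- the kind of symmetry that the
# logical characterisation of GNN expressiveness makes precise.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(gnn_layer(adj, {0: 1.0, 1: 1.0, 2: 1.0}))  # → {0: 2.0, 1: 2.0, 2: 2.0}
```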

Biography: Martin Grohe is a Professor for Theoretical Computer Science at RWTH Aachen University. He received his PhD in Mathematics at Freiburg University in 1994 and then spent a year as a visiting scholar at Stanford and the University of California at Santa Cruz. Before joining the Department of Computer Science of RWTH Aachen in 2012, he held positions at the University of Illinois at Chicago, the University of Edinburgh, and the Humboldt University at Berlin. His research interests are in theoretical computer science interpreted broadly, including logic, algorithms and complexity, graph theory, theoretical aspects of machine learning, and database theory.

2023

Pat Hanrahan

Date: Thursday, May 18, 2023

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204, Burnaby campus

Talk Title: Shading Languages and the Emergence of Programmable Graphics Systems

View Seminar Recording

Abstract: A major challenge in using computer graphics for movies and games is to create a rendering system that can create realistic pictures of a virtual world.  The system must handle the variety and complexity of the shapes, materials, and lighting that combine to create what we see every day.  The images must also be free of artifacts, emulate cameras to create depth of field and motion blur, and compose seamlessly with photographs of live action.

Pixar's RenderMan was created for this purpose, and has been widely used in feature film production.  A key innovation in the system is to use a shading language to procedurally describe appearance.  Shading languages were subsequently extended to run in real-time on graphics processing units (GPUs), and now shading languages are widely used in game engines.  The final step was the realization that the GPU is a data-parallel computer, and that the shading language could be extended into a general-purpose data-parallel programming language.  This enabled a wide variety of applications in high performance computing, such as physical simulation and machine learning, to be run on GPUs.  Nowadays, GPUs power the fastest computers in the world.  This talk will review the history of shading languages and GPUs, and discuss the broader implications for computing.
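The core idea of "procedurally describing appearance" is that a shader is just a function evaluated at each surface point. A minimal Python stand-in (not RenderMan Shading Language; the function name and parameters are illustrative) for the classic diffuse model:

```python
def lambert_shader(normal, light_dir, base_color):
    """Diffuse (Lambertian) shading: scale the colour by max(0, N . L)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    intensity = max(0.0, n_dot_l)
    return tuple(c * intensity for c in base_color)

# A surface facing up, lit from a direction where N . L = 0.5.
print(lambert_shader((0.0, 0.0, 1.0), (0.0, 0.866, 0.5), (1.0, 0.2, 0.2)))
# → (0.5, 0.1, 0.1)
```

Because such a function runs independently at every point, it maps naturally onto data-parallel hardware, which is exactly the observation that turned GPUs into general-purpose data-parallel computers.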

Biography: Pat Hanrahan is the Canon Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University.  His research focuses on rendering algorithms, graphics systems, and visualization.  Hanrahan received a Ph.D. in biophysics from the University of Wisconsin-Madison in 1985.  As a founding employee at Pixar Animation Studios in the 1980s, Hanrahan led the design of the RenderMan Interface Specification and the RenderMan Shading Language.  In 1989, he joined the faculty of Princeton University.  In 1995, he moved to Stanford University.  More recently, Hanrahan served as a co-founder and CTO of Tableau Software.  He has received three Academy Awards for Science and Technology, the SIGGRAPH Computer Graphics Achievement Award, the SIGGRAPH Stephen A. Coons Award, and the IEEE Visualization Career Award.  He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences.  In 2019, he received the ACM A. M. Turing Award.

Kevin Murphy

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204, Burnaby campus

Talk Title: The four pillars of machine learning

View Seminar Recording

Abstract: I will present a unified perspective on the field of machine learning research, following the structure of my recent book, "Probabilistic Machine Learning: Advanced Topics" (https://probml.github.io/book2). In particular, I will discuss various models and algorithms for tackling the following four key tasks, which I call the "4 pillars of ML": prediction, control, discovery and generation. For each of these tasks, I will also briefly summarize a few of my own contributions, including methods for robust prediction under distribution shift, statistically efficient online decision making, discovering hidden regimes in high-dimensional time series data, and generating high-resolution images.

Biography: Kevin was born in Ireland, but grew up in England. He got his BA from U. Cambridge, his MEng from U. Pennsylvania, and his PhD from UC Berkeley. He then did a postdoc at MIT, and was an associate professor of computer science and statistics at the University of British Columbia in Vancouver, Canada, from 2004 to 2012. After getting tenure, he went to Google in California on his sabbatical and then ended up staying. He currently runs a team of about 8 researchers inside Google Brain; the team works on generative models, Bayesian inference, ML methods that go beyond the iid assumption, and various other topics. Kevin has published over 125 papers in refereed conferences and journals, as well as 3 textbooks on machine learning published in 2012, 2022 and 2023 by MIT Press. (The 2012 book was awarded the DeGroot Prize for best book in the field of Statistical Science.) Kevin was also the (co) Editor-in-Chief of JMLR from 2014 to 2017.

2022

Mario Szegedy

Date: Thursday, December 15, 2022

Time: 2:00 PM - 3:00 PM PST

Location: SFU Big Data Hub, Room 1900, Burnaby campus

Zoom details for online participation: 

https://sfu.zoom.us/j/89588174259?pwd=bFVHNlFjT2dueTJVMFZvd0dOcGRlUT09

Meeting ID: 895 8817 4259
Password: 378277

Talk Title: Entanglement as a Resource

Abstract: In the main part of the talk, I discuss basic properties of quantum entanglement: How does quantum entanglement differ from classical random correlation? Does it make sense to ask when, in the quantum teleportation protocol, the teleportation actually happens? We give an explanation of quantum measurement using a heat analogy, and we talk a little bit about controlling and localizing entanglement. At the very end of the talk I mention a recent result, joint with Sergey Bravyi, Yash Sharma and Ronald de Wolf, entitled "Generating k EPR-pairs from an n-party resource state".

Biography: Mario Szegedy is a Hungarian-American computer scientist and Professor of Computer Science at Rutgers University. He received his Ph.D. in computer science in 1989 under the supervision of Laszlo Babai from the University of Chicago. He held a Lady Davis Postdoctoral Fellowship at the Hebrew University, Jerusalem (1989–90), a postdoc at the University of Chicago (1991–92), and a postdoc at Bell Laboratories (1992). He was awarded the Gödel Prize twice, in 2001 and 2005, for his work on computational complexity, including probabilistically checkable proofs and the space complexity of approximating the frequency moments in streamed data.

Magdalena Balazinska

Date: Thursday, November 24, 2022

Time: 11:30 AM - 12:30 PM PST

Location: TASC 1 9204

Talk Title: Storing and Querying Video Data.

View Seminar Recording

Abstract: The proliferation of inexpensive high-quality cameras coupled with recent advances in machine learning and computer vision have enabled new applications on video data. This in turn has renewed interest in video data management systems (VDMSs). In this talk, we explore how to build a modern data management system for video data. We focus, in particular, on the storage manager and present several techniques to store video data in a way that accelerates queries over that data. We then move up the stack and discuss different types of data models that can be exposed to applications. Finally, we discuss how it's possible to support users in expressing queries to find events of interest in a video database.

Biography: Magdalena Balazinska is Professor, Bill & Melinda Gates Chair, and Director of the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Magdalena's research interests are in the field of database management systems. Her current research focuses on data management for data science, big data systems, cloud computing, and image and video analytics. Prior to her leadership of the Allen School, Magdalena was the Director of the eScience Institute, the Associate Vice Provost for Data Science, and the Director of the Advanced Data Science PhD Option. She also served as Co-Editor-in-Chief for Volume 13 of the Proceedings of the Very Large Data Bases Endowment (PVLDB) journal and as PC co-chair for the corresponding VLDB'20 conference. Magdalena is an ACM Fellow. She holds a Ph.D. from the Massachusetts Institute of Technology (2006). Shortly after her arrival at the University of Washington, she was named a Microsoft Research New Faculty Fellow (2007). Magdalena received the inaugural VLDB Women in Database Research Award (2016) for her work on scalable distributed data systems. She also received an ACM SIGMOD Test-of-Time Award (2017) for her work on fault-tolerant distributed stream processing and a 10-year most influential paper award (2010) from her earlier work on reengineering software clones.

Steven Feiner

Date: Thursday, October 27, 2022

Time: 11:00 AM - 12:00 PM PST

Location: TASC 1 9204

Talk Title: Cueing Action in Augmented Reality and Virtual Reality

Abstract: For over fifty years, researchers have investigated how augmented reality (AR) and virtual reality (VR) can help people perform tasks, training them in advance or assisting them on the fly. I will describe work our lab has done in this domain, addressing collaboration between a remote expert and a local technician, as well as system-provided assistance of an individual user. In these tasks, users are given information about the current step (cues), either immediately before or while doing it. Many tasks, in the real world and in AR and VR, also use precues about what to do after the current step. We have been exploring how precueing multiple steps in AR and VR can influence performance, for better or for worse, in ordered sequential tasks ranging from simple path following to translating and rotating physical objects, and in unordered bimanual tasks with potentially concurrent actions. I will present some of our results on the effectiveness of different numbers and types of cues and precues and the visualizations through which they are communicated. And I will conclude with some thoughts about future directions.

Biography: Steven Feiner is a Professor of Computer Science at Columbia University, where he directs the Computer Graphics and User Interfaces Lab. His lab has been conducting AR and VR research for over 25 years, designing and evaluating novel 3D interaction and visualization techniques, creating the first outdoor mobile AR system using a see-through head-worn display and GPS, and pioneering experimental applications of AR and VR to a wide range of fields. Steve is a Fellow of the ACM and the IEEE and a member of the CHI Academy and the IEEE VR Academy. He is the recipient of the ACM SIGCHI Lifetime Research Award, the IEEE ISMAR Career Impact Award, and the IEEE VGTC Virtual Reality Career Award. Together with his students, he has won the IEEE ISMAR Impact Paper Award, the ISWC Early Innovator Award, and the ACM UIST Lasting Impact Award. Steve has served as general chair or program chair for over a dozen ACM and IEEE conferences and is coauthor of two editions of Computer Graphics: Principles and Practice.

Onur Mutlu

Date: Thursday, September 22, 2022

Time: 11:00 AM - 12:00 PM

Location: Big Data Hub Presentation Studio (ASB 10900)

Talk Title: Memory-Centric Computing

Abstract: Computing is bottlenecked by data. Large amounts of application data overwhelm storage capability, communication capability, and computation capability of the modern machines we design today. As a result, many key applications' performance, efficiency and scalability are bottlenecked by data movement. In this lecture, we describe three major shortcomings of modern architectures in terms of 1) dealing with data, 2) taking advantage of the vast amounts of data, and 3) exploiting different semantic properties of application data. We argue that an intelligent architecture should be designed to handle data well. We show that handling data well requires designing architectures based on three key principles: 1) data-centric, 2) data-driven, 3) data-aware. We give several examples for how to exploit each of these principles to design a much more efficient and high performance computing system. We especially discuss recent research that aims to fundamentally reduce memory latency and energy, and practically enable computation close to data, with at least two promising novel directions: 1) processing using memory, which exploits analog operational properties of memory chips to perform massively-parallel operations in memory, with low-cost changes, and 2) processing near memory, which integrates sophisticated additional processing capability in memory controllers, the logic layer of 3D-stacked memory technologies, or memory chips to enable high memory bandwidth and low memory latency to near-memory logic. We show both types of architectures can enable orders of magnitude improvements in performance and energy consumption of many important workloads, such as graph analytics, database systems, machine learning, and video processing. We discuss how to enable adoption of such fundamentally more intelligent architectures, which we believe are key to efficiency, performance, and sustainability. We conclude with some guiding principles for future computing architecture and system designs.

A short accompanying paper, which appeared in DATE 2021, can be found here and serves as recommended reading: https://people.inf.ethz.ch/omutlu/pub/intelligent-architectures-for-intelligent-computingsystems-invited_paper_DATE21.pdf

A longer overview & survey of modern memory-centric computing can be found here and also serves as recommended reading: "A Modern Primer on Processing in Memory" https://people.inf.ethz.ch/omutlu/pub/ModernPrimerOnPIM_springer-emerging-computing-bookchapter21.pdf

Biography: Onur Mutlu is a Professor of Computer Science at ETH Zurich. He is also a faculty member at Carnegie Mellon University, where he previously held the Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, hardware security, and bioinformatics. A variety of techniques he, along with his group and collaborators, has invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google. He received the Intel Outstanding Researcher Award, NVMW Persistent Impact Prize, IEEE High Performance Computer Architecture Test of Time Award, the IEEE Computer Society Edward J. McCluskey Technical Achievement Award, ACM SIGARCH Maurice Wilkes Award, the inaugural IEEE Computer Society Young Computer Architect Award, the inaugural Intel Early Career Faculty Award, US National Science Foundation CAREER Award, Carnegie Mellon University Ladd Research Award, faculty partnership awards from various companies, and a healthy number of best paper or "Top Pick" paper recognitions at various computer systems, architecture, and security venues. He is an ACM Fellow "for contributions to computer architecture research, especially in memory systems", IEEE Fellow for "contributions to computer architecture research and practice", and an elected member of the Academy of Europe (Academia Europaea). 
His computer architecture and digital logic design course lectures and materials are freely available on YouTube (https://www.youtube.com/OnurMutluLectures), and his research group makes a wide variety of software and hardware artifacts freely available online (https://safari.ethz.ch/). For more information, please see his webpage at https://people.inf.ethz.ch/omutlu/.

2019

Dan Suciu

Date: Thursday, October 24, 2019

Time: 2:30-3:30 p.m.

Location: KEY, Big Data Presentation Studio (formerly known as IRMACS)
Applied Sciences Building, Room 10900

Talk Title: Probabilistic Databases: A Dichotomy Theorem and Limitations of DPLL Algorithms

Abstract: Probabilistic Databases (PDBs) extend traditional relational databases by annotating each record with a weight, or a probability. The query evaluation problem, "given a query (a first-order logic sentence), compute its probability", is an instance of the weighted model counting problem for Boolean formulas, and has applications to Markov Logic Networks and to other Statistical Relational Models. I will present two results in this talk. The first is a dichotomy theorem stating that, for each universally (or existentially) quantified sentence without negation, computing its probability is either #P-hard, or is in PTIME in the size of the probabilistic database. The second result is a limitation of Davis-Putnam-Logemann-Loveland (DPLL) algorithms: there exist FOL sentences that can be computed in PTIME over probabilistic databases (using lifted inference), yet every DPLL algorithm, even extended with caching and with components, takes exponential time.
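A tiny sketch of the setting (illustrative, not from the talk): in a tuple-independent probabilistic table R, each tuple is present with its own probability. For a safe query such as "EXISTS x. R(x)", lifted inference computes the answer in linear time by exploiting independence, while naive weighted model counting enumerates exponentially many possible worlds. All names below are hypothetical.

```python
from itertools import product

# Tuple-independent table R: tuple i is present with probability probs[i],
# independently of all other tuples.
probs = [0.5, 0.4, 0.1]

def prob_exists_lifted(ps):
    """Lifted (PTIME) inference for the safe query EXISTS x. R(x):
    P(Q) = 1 - prod_i (1 - p_i)."""
    p_empty = 1.0
    for p in ps:
        p_empty *= 1.0 - p
    return 1.0 - p_empty

def prob_exists_worlds(ps):
    """Exponential-time check: sum the weights of all 2^n possible
    worlds in which R is nonempty."""
    total = 0.0
    for world in product([0, 1], repeat=len(ps)):
        weight = 1.0
        for present, p in zip(world, ps):
            weight *= p if present else 1.0 - p
        if any(world):
            total += weight
    return total

print(round(prob_exists_lifted(probs), 6))  # 0.73
print(round(prob_exists_worlds(probs), 6))  # 0.73 (same answer, 2^n worlds)
```

The dichotomy theorem in the abstract says every non-negated quantified sentence falls on one side of this divide: either a PTIME lifted computation like the one above exists, or the problem is #P-hard.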

Biography: Dan Suciu is a Professor in Computer Science at the University of Washington. He received his Ph.D. from the University of Pennsylvania in 1995, was a principal member of the technical staff at AT&T Labs, and joined the University of Washington in 2000. Suciu conducts research in data management, with an emphasis on topics related to Big Data and data sharing, such as probabilistic data, data pricing, parallel data processing, and data security. He is a co-author of two books, Data on the Web: from Relations to Semistructured Data and XML (1999) and Probabilistic Databases (2011). He is a Fellow of the ACM, holds twelve US patents, received the best paper award at SIGMOD 2000, SIGMOD 2019 and ICDT 2013, the ACM PODS Alberto Mendelzon Test of Time Award in 2010 and in 2012, the 10 Year Most Influential Paper Award at ICDE 2013, and the VLDB Ten Year Best Paper Award in 2014, and is a recipient of the NSF Career Award and of an Alfred P. Sloan Fellowship. Suciu serves on the VLDB Board of Trustees, is an associate editor for the Journal of the ACM, the VLDB Journal, ACM TWEB, and Information Systems, and is a past associate editor for ACM TODS and ACM TOIS. Suciu's PhD students Gerome Miklau, Christopher Re and Paris Koutris received the ACM SIGMOD Best Dissertation Award in 2006, 2010, and 2016 respectively, and Nilesh Dalvi was a runner-up in 2008.

2018

Klara Nahrstedt

Date: Thursday, October 11, 2018

Time: 2:30-3:30 p.m.

Location: KEY, Big Data Presentation Studio (formerly known as IRMACS)

Applied Sciences Building, Room 10900

Talk Title: Real-Time and Trustworthy Micro-service Edge-Cloud Operating Infrastructure for Scientific Workflows

Abstract: Studies suggest that it typically takes 20 years to go from the discovery of new materials to the fabrication of new and next-generation devices based on those materials. Many other scientific domains show similar delays. These scientific cycles must be shortened, and this will require a major transformation in how we collect digital data about physical artifacts from scientific instruments, and how we make the digital data available to computational tools for developing new materials, fabricating new devices, and speeding up discoveries in science. In this talk we present a real-time micro-service operating infrastructure for scientific workflows, named 4CeeD, which focuses on the immense potential of capturing, curating, correlating and coordinating digital data in a real-time and trusted manner before fully archiving and publishing them for wide access and sharing.

We will discuss novel cloud-based services and algorithms that are an integral part of 4CeeD for collecting and correlating data from microscopes and fabrication instruments, modeling of scientific workflows in clouds as micro-services, a self-adaptive micro-service-based infrastructure for heterogeneous scientific workflows, and the role of edge computing. Specifically, we will show that our micro-service-based approach helps to improve the flexibility of workflow composition and execution, and enables fine-grained scheduling at the task level, considering task sharing across multiple workflows. Furthermore, the self-adaptive micro-service management employs an integration of feedback control, deep learning and optimization frameworks to offer resource adaptation without any advance knowledge of workflow structures. We will also discuss a unique usage of edge computing that enables aging instruments to be integrated into the micro-service operating infrastructure. The evaluation of the 4CeeD system shows robust prediction of resource usage and adaptation under dynamic workloads, real-time service to users to offload, curate and analyze data, and access to diverse instruments.

Joint work with Phuong Nguyen, Tarek Elgamal, Steve Konstanty, and Todd Nicholson.

Biography: Klara Nahrstedt is the Ralph and Catherine Fisher Professor in the Computer Science Department, and Director of Coordinated Science Laboratory in the College of Engineering at the University of Illinois at Urbana-Champaign. Her research interests are directed toward tele-immersive systems, end-to-end Quality of Service (QoS) and resource management in large scale distributed systems and networks, and real-time security and privacy in cyber-physical systems such as power grid. She is the co-author of multimedia books `Multimedia: Computing, Communications and Applications' published by Prentice Hall, and ‘Multimedia Systems’ published by Springer Verlag. She is the recipient of the IEEE Communication Society Leonard Abraham Award for Research Achievements, University Scholar, Humboldt Award, IEEE Computer Society Technical Achievement Award, ACM SIGMM Technical Achievement Award, and the former chair of the ACM Special Interest Group in Multimedia. She was the general co-chair and TPC co-chair of many international conferences including ACM Multimedia, IEEE Percom, IEEE IOTDI and others. Klara Nahrstedt received her Diploma in Mathematics from Humboldt University, Berlin, Germany in 1985. In 1995 she received her PhD from the University of Pennsylvania in the Department of Computer and Information Science. She is ACM Fellow, IEEE Fellow, and Member of the German National Academy of Sciences (Leopoldina Society).

Moshe Y. Vardi

Date: Thursday, January 11, 2018

Time: 2:30-3:30 p.m.

Location: TASC 1 Building, Room T9204

Talk Title: The Automated-Reasoning Revolution: From Theory to Practice and Back

Abstract: For the past 40 years computer scientists generally believed that NP-complete problems are intractable. In particular, Boolean satisfiability (SAT), as a paradigmatic automated-reasoning problem, has been considered to be intractable. Over the past 20 years, however, there has been a quiet, but dramatic, revolution, and very large SAT instances are now being solved routinely as part of software and hardware design. In this talk I will review this amazing development and show how automated reasoning is now an industrial reality. 
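The solver idea at the heart of this development can be sketched in a few lines. Below is a toy DPLL decision procedure in Python — an illustrative sketch, not material from the talk; modern CDCL solvers layer clause learning, branching heuristics and restarts on top of this skeleton, which is what makes industrial-scale instances tractable in practice.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT procedure: unit propagation plus splitting.

    Clauses are lists of DIMACS-style literals (positive int i is
    variable i, negative -i is its negation). Returns a satisfying
    assignment dict {var: bool} or None if the formula is unsatisfiable.
    """
    if assignment is None:
        assignment = {}
    # Simplify every clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied; drop it
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: conflict, backtrack
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    # Unit propagation: a one-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Otherwise split on the first unassigned variable.
    v = abs(simplified[0][0])
    return (dpll(clauses, {**assignment, v: True})
            or dpll(clauses, {**assignment, v: False}))

# (x1 v x2) & (~x1 v x3) & (~x2 v ~x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))
```

For real workloads one would of course reach for a production solver rather than this sketch; the point is only that the core search procedure is simple, and the engineering around it is where the revolution happened.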

I will then describe how we can leverage SAT solving to accomplish other automated-reasoning tasks. Counting the number of satisfying truth assignments of a given Boolean formula, or sampling such assignments uniformly at random, are fundamental computational problems in computer science with applications in software testing, software synthesis, machine learning, personalized learning, and more. While the theory of these problems has been thoroughly investigated since the 1980s, approximation algorithms developed by theoreticians do not scale up to industrial-sized instances. Algorithms used in industry offer better scalability, but give up certain correctness guarantees to achieve it. We describe a novel approach, based on universal hashing and Satisfiability Modulo Theories, that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
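To make the counting problem concrete, here is a hedged, brute-force #SAT counter in Python (the formula and function name are illustrative, not from the talk). It enumerates all 2^n assignments — exactly the exponential blow-up that the hashing-based approach described above is designed to avoid.

```python
from itertools import product

def count_models(num_vars, clauses):
    """Brute-force #SAT: count assignments satisfying a CNF formula.

    Clauses use DIMACS-style literals: positive int i means variable i,
    negative -i means its negation (variables are 1-indexed).
    Runs in O(2^num_vars), which is why it cannot scale to the
    industrial-sized instances the talk targets.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # bits[i-1] is the truth value of variable i
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 v x2) & (~x1 v x3) over x1..x3 has 4 satisfying assignments
print(count_models(3, [[1, 2], [-1, 3]]))  # prints 4
```

Universal-hashing approaches instead add random XOR constraints to partition the solution space into roughly equal cells, then use a SAT/SMT solver to count within one cell — trading brute-force enumeration for a small number of solver calls with provable guarantees.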

Biography: Moshe Y. Vardi is the George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the recipient of three IBM Outstanding Innovation Awards, the ACM SIGACT Goedel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, the EATCS Distinguished Achievements Award, and the Southeastern Universities Research Association's Distinguished Scientist Award. He is the author or co-author of over 500 papers, as well as two books: "Reasoning about Knowledge" and "Finite Model Theory and Its Applications". He is a Fellow of the Association for Computing Machinery, the American Association for Artificial Intelligence, the American Association for the Advancement of Science, the European Association for Theoretical Computer Science, the Institute of Electrical and Electronics Engineers, and the Society for Industrial and Applied Mathematics. He is a member of the US National Academy of Engineering and National Academy of Sciences, the American Academy of Arts and Sciences, the European Academy of Sciences, and Academia Europaea. He holds honorary doctorates from Saarland University in Germany, Orleans University in France, UFRGS in Brazil, and the University of Liege in Belgium. He is currently a Senior Editor of the Communications of the ACM, after having served for a decade as Editor-in-Chief.

2017

Margaret Burnett

Date: Thursday, November 9, 2017

Time: 2:30-3:30 p.m.

Location: TASC 1 Building, Room T9204

Talk Title: Gender-Inclusive Software

Abstract: Gender inclusiveness in software companies is receiving a lot of attention these days, but it overlooks a potentially critical factor: software itself. Research into how individual differences cluster by gender shows that males and females often work differently with software for problem-solving (e.g., tools for programming, debugging, spreadsheet modeling, end-user programming, game-based learning, visualizing information, etc.). In this talk, I'll present a method we call GenderMag to reveal gender biases in user-facing software. At the core of the method are 5 facets of gender differences drawn from a large body of foundational work on gender differences from computer science, psychology, education, communications, and women's studies. I'll also present highlights from real-world investigations of software practitioners' ability to identify gender-inclusiveness issues, using GenderMag, in software they create and maintain. In these field studies, practitioners identified a surprisingly high number of gender-inclusiveness issues in their own software. I'll present these results and more, along with tales from the trenches on what it's like to use GenderMag, where the pitfalls lie, and all the things we are still learning about it.

Biography: Margaret Burnett is a Distinguished Professor at Oregon State University, an ACM Distinguished Scientist, and a CHI Academy member. Her research on gender inclusiveness in software -- especially in software tools for programming and problem-solving -- spans over 10 years. Prior to this work, most gender investigations into software had addressed only gender-targeted software, such as video games for girls. Burnett and her team systematically debunked misconceptions of gender neutrality in a variety of software platforms, and then devised software features that help avert the identified problems. She has reported these results in over 30 publications, and has presented keynotes and invited talks on this topic in 8 countries. She serves on a variety of HCI and Software Engineering committees and editorial boards, and on the Academic Alliance Advisory Board of the U.S. National Center for Women & Information Technology (NCWIT). More on Burnett can be found at: http://web.engr.oregonstate.edu/~burnett/

Dr. Jiawei Han

Date: Thursday, October 20, 2017

Time: 3:00-4:00 p.m.

Location: TASC 1 Building, Room T9204

Talk Title: Mining Structures from Massive Text Data: A Data-Driven Approach

Abstract: Real-world big data are largely unstructured, interconnected, and in the form of natural language text. One of the grand challenges is to turn such massive data into structured networks and actionable knowledge. We propose a text mining approach that requires only distant or minimal supervision but relies on massive data. We show that quality phrases can be mined from such massive text data, that types can be extracted with distant supervision, and that entities, attributes and values can be discovered by meta-path-directed pattern discovery. Finally, we propose a data-to-network-to-knowledge paradigm: first turn data into relatively structured information networks, and then mine such text-rich and structure-rich networks to generate useful knowledge. We show that this paradigm represents a promising direction for turning massive text data into structured networks and useful knowledge.

Biography: Jiawei Han is Abel Bliss Professor in the Department of Computer Science, University of Illinois at Urbana-Champaign. He was a professor in the School of Computing Science, SFU, from 1987 to 2001. He has been researching data mining, information network analysis, database systems, and data warehousing, with over 800 journal and conference publications. He has chaired or served on the program committees of most major international data mining and database conferences. He also served as the founding Editor-in-Chief of ACM Transactions on Knowledge Discovery from Data and as Director of the Information Network Academic Research Center supported by the U.S. Army Research Lab (2009-2016), and has been co-Director of KnowEnG, an NIH-funded Center of Excellence in Big Data Computing, since 2014. He is a Fellow of ACM and a Fellow of IEEE, and received the 2004 ACM SIGKDD Innovations Award, the 2005 IEEE Computer Society Technical Achievement Award, and the 2009 W. Wallace McDowell Award from the IEEE Computer Society. His co-authored book "Data Mining: Concepts and Techniques" has been widely adopted as a textbook worldwide.