Professor Monica Lam Sin Ling
林倩玲教授

Class of 1977, Form 6 — Director of Stanford’s Virtual Assistant Programme

1977屆, 中六畢業 — 史丹福大學虛擬助理研究總監


Monica S. Lam studied lower sixth at St. Paul’s College during the 1976–77 academic year. Since 1988, she has been a professor at Stanford University’s Computer Science Department. Presently, she is also the faculty director of Stanford’s Open Virtual Assistant Lab (Oval).

Lam’s expertise lies in computer systems and languages, both computing languages and natural language (i.e. human language). Her early work was on compilers, which translate software code into a format the hardware can execute. As parallel processing became more common among research supercomputers in the 1990s (and later in consumer systems by the mid-2000s), Lam specialised in optimising parallelisation (dividing up a workload and distributing it across multiple systems).* Her pioneering theory of affine partitioning has enabled code to run more efficiently.† She was part of the team behind SUIF, a research compiler widely used in academia. Lam also co-authored Compilers: Principles, Techniques, and Tools (second edition, 2006), known as the ‘Dragon Book’, a highly popular computer science textbook.

More recently, Lam has been leading and championing an open-source virtual assistant initiative through Oval, which involves interpreting and processing natural language via language models and machine learning. Concerned about an emerging corporate oligopoly among current offerings such as Alexa (Amazon), Google Assistant, and Siri (Apple), Lam has made open access (to code, databases, and documentation) a priority for Oval. By extension, since everyone can know exactly what the assistant is doing, and is free to build their own virtual assistant using Oval’s tools, there is an extra emphasis on personal control and privacy.

As she wrote in a 2004 retrospective on SUIF, practical application is a core theme in her work: ‘In fact, I think that the success of this research rested partly on the fact that each step along the way addressed some real-life problem. This helped us focus on important issues and ensured that the basic approach was correct.’

Click here for an interview with Lam, specially prepared as part of this exhibition.

* Contrast parallel processing with serial processing, where a workload is handled by a single system in sequence. While parallel processing is more efficient, serial processing is considerably easier to program for. This made serial processing prevalent for early computer programs (up to the late 1980s). Part of the difficulty in coding for parallel processing comes from figuring out how to divide up the work optimally: the ideal is to keep all available processors/systems busy while minimising synchronisation (the overhead of each system having to report its task progress to all the others).
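The contrast described in this footnote can be sketched in Python (a toy illustration, not drawn from Lam’s work): a serial sum walks the data in sequence, while a parallel sum splits the data into chunks, hands each chunk to a worker, and synchronises once at the end to combine the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum(data):
    # Serial processing: a single worker handles the whole workload in sequence.
    total = 0
    for x in data:
        total += x
    return total

def parallel_sum(data, workers=4):
    # Parallel processing: split the workload into one chunk per worker,
    # process the chunks concurrently, then synchronise once at the end
    # to combine the partial results.
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, parts))  # one partial result per chunk
    return sum(partials)  # the single synchronisation/combination point

data = list(range(1000))
assert serial_sum(data) == parallel_sum(data) == 499500
```

(In CPython, threads illustrate the structure of the division rather than a true speed-up; real parallel systems run the chunks on separate processors.)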

† Affine partitioning theory makes use of the geometric concept of affine transformations. An affine transformation preserves ratios of distances by means of translation, rotation, reflection, scaling, etc.; similar triangles, for example, are related by such a transformation. Affine partitioning analyses software code and assigns the iterations of its loops to processors using affine functions of the loop indices, partitioning (dividing) the work into parallel processes. It is used in loop transformation algorithms.
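As a toy illustration of the idea (a sketch, not Lam’s algorithm): in the loop nest below, each iteration (i, j) depends only on the previous iteration in its own row, so the affine mapping p(i, j) = i sends every iteration of a row to the same processor, and no synchronisation between processors is needed.

```python
from concurrent.futures import ThreadPoolExecutor

N, M = 4, 6  # for simplicity, one (virtual) processor per row

# The sequential loop nest: A[i][j] depends only on A[i][j-1],
# i.e. only on earlier iterations within the same row i.
A_seq = [[0] * M for _ in range(N)]
for i in range(N):
    for j in range(1, M):
        A_seq[i][j] = A_seq[i][j - 1] + 1

# Affine partition p(i, j) = i: processor i executes all of row i
# independently, with no inter-processor synchronisation.
def run_row(_i):
    row = [0] * M
    for j in range(1, M):
        row[j] = row[j - 1] + 1
    return row

with ThreadPoolExecutor(max_workers=N) as pool:
    A_par = list(pool.map(run_row, range(N)))

assert A_par == A_seq  # the parallel schedule computes the same result
```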
