Operations Research Seminars
Thursdays at 2:00pm in ASB 10908 (in person) and via Zoom.
Please join us for this weekly seminar on a wide variety of topics under the umbrella of operations research, in association with CORDS (Centre for Operations Research and Decision Sciences) and the Department of Mathematics, Simon Fraser University. Seminars are usually co-hosted with the COCANA group at UBC-Okanagan. Our aim is to meet and discuss Operations Research topics.
Unless otherwise noted, talks take place on Thursdays at 2:00pm in person in ASB 10908 as well as on Zoom. The Zoom link for online seminars is distributed to the OR Seminar email list. To be added to the email list or to receive the Zoom link directly, please contact the seminar organizer.
Current SFU organizers: Tamon Stephen and Sandy Rutherford.
2022-2023 Talks
Please find our 2022-23 seminar information on researchseminars.org.
2021-2022 Talks
Friday, April 8, 11:00 a.m. | M.Sc. defence: Samantha Zimmerman |
Team-based Staffing Optimization for an Urgent and Primary Care Centre | |||
Urgent and primary care centres (UPCCs) provide both walk-in services for urgent healthcare needs and booked appointments for longitudinal care. UPCCs utilize multi-disciplinary teams of healthcare professionals who collaborate to provide client care. In my research, I developed new algorithms for team-based staffing at a UPCC in Vancouver, British Columbia. To model client access indicators, I built a discrete event simulation based on the UPCC operational profile and client visit data. My analysis compares two algorithms that determine the minimum staffing levels required to meet targets on simulated client access, given by the time-dependent proportion of simulated clients receiving care as opposed to leaving due to a prolonged wait. I extended an iterative staffing algorithm to identify small-interval staffing requirements, which I combined with integer programming to determine shift-based staffing. I compared this with simulation optimization of shift-based staffing. My modelling approach gives staffing recommendations that efficiently improve access to care. | |||||
April 4 | Jesús De Loera, UC-Davis | The Geometry of All Pivot Rules for the Simplex Method | |||
The simplex method is one of the most famous and popular algorithms in optimization. Purely geometrically, a linear program (LP) is a polyhedron together with an orientation of its graph. The famous simplex method selects a path from an initial vertex to the sink (optimum) and thus determines an arborescence (of shortest monotone paths). The engine of any version of the simplex method is a pivot rule that selects the outgoing arc at the current vertex. Pivot rules come in many forms and types, but after 80 years we still do not know whether there is one that can make the simplex method run in polynomial time. Motivated by this question, we tried to understand the structure of all pivot rules for a linear program. Classic results in parametric linear optimization explain how changes of the objective function affect the orientation and the sink and, for a pivot rule, its arborescence. I present a type of parametric analysis for all pivot rules belonging to a certain class, the memoryless pivot rules: we associate polytopes that parametrize the memoryless pivot rules of a given LP, an attempt to get a geometric/topological picture of a “space of all pivot rules of an LP”. This story is related to the performance of pivot rules, parametric linear programs, shadow-vertex rules, and several classic topics in polyhedral geometry, and is joint work with Alex Black, Niklas Lütjeharms, and Raman Sanyal. |
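As a small editorial illustration of the geometric picture in this abstract (a graph oriented by the objective, a pivot rule choosing one improving arc per vertex, and the resulting arborescence), here is a hedged Python sketch on a toy 0/1 cube. The cube, the objective vector, and the greatest-improvement rule are illustrative assumptions, not the pivot-rule class analyzed in the talk.

```python
# Toy illustration: orient the graph of a polytope by an objective c and
# extract the arborescence chosen by a memoryless "greatest improvement"
# pivot rule. The cube, objective, and rule are illustrative assumptions.
import itertools

c = (1.0, 2.0, 4.0)                                     # assumed objective vector
vertices = list(itertools.product([0, 1], repeat=3))    # 0/1 cube vertices

def value(v):
    return sum(ci * vi for ci, vi in zip(c, v))

def neighbors(v):
    # cube edges: flip exactly one coordinate
    return [tuple(1 - vi if i == j else vi for j, vi in enumerate(v))
            for i in range(3)]

sink = max(vertices, key=value)          # unique optimum for a generic c
arborescence = {}
for v in vertices:
    if v == sink:
        continue
    improving = [w for w in neighbors(v) if value(w) > value(v)]
    # memoryless rule: the choice depends only on the current vertex
    arborescence[v] = max(improving, key=value)

for v, w in sorted(arborescence.items()):
    print(f"{v} -> {w}")
```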
March 28 | Joe Paat, UBC-Vancouver (Sauder) | A Colorful Steinitz Lemma Applied to Block Integer Programs | |||
Block integer programs (IPs) model a wide range of problems including those in social choice, scheduling, and stochastic optimization. Recently, algorithms for block IPs have been improved by using the so-called Steinitz Lemma, which is a statement about the rearrangement of a set of vectors. In this work, we develop a variation of the Steinitz Lemma that rearranges multiple sets simultaneously. We then demonstrate how our variation can be used to derive new results for block IPs. This is joint work with Timm Oertel and Robert Weismantel. |
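For context, a hedged brute-force illustration of the classical (non-colorful) rearrangement statement behind the Steinitz Lemma: vectors with zero sum and small norm can be ordered so that every prefix sum stays bounded. The example data and the exhaustive search are assumptions for illustration only; the lemma itself guarantees a dimension-dependent bound without any enumeration.

```python
# Brute-force check of a Steinitz-style rearrangement on toy data: find the
# ordering of zero-sum vectors whose prefix sums stay smallest in norm.
# The data and the exhaustive search are purely illustrative assumptions.
from itertools import permutations
import math

vecs = [(1.0, 0.0), (-0.5, 0.8), (-0.5, -0.8), (0.3, 0.4), (-0.3, -0.4)]
assert all(abs(s) < 1e-9 for s in map(sum, zip(*vecs)))   # zero total sum

def max_prefix_norm(order):
    sx = sy = 0.0
    worst = 0.0
    for x, y in order:
        sx, sy = sx + x, sy + y
        worst = max(worst, math.hypot(sx, sy))
    return worst

best = min(permutations(vecs), key=max_prefix_norm)
print("best ordering:", best)
print("max prefix-sum norm:", round(max_prefix_norm(best), 3))
```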
March 14 | Amy Wiebe, SFU | Non-realizability of polytopes via linear programming |
A classical question in polytope theory is whether an abstract polytope can be realized as a concrete convex object. Beyond dimension 3, there seems to be no concise answer to this question in general. In specific instances, answering the question in the negative is often done via “final polynomials” introduced by Bokowski and Sturmfels. This method involves finding a polynomial which, if the polytope were realizable, would have to be simultaneously zero and positive, a clear contradiction. The search space for these polynomials is the ideal of Grassmann-Plücker relations, which quickly becomes too large to search efficiently, and in most instances where this technique is used, additional assumptions on the structure of the desired polynomial are necessary. In this talk, I will describe how, by changing the search space, we are able to use linear programming to exhaustively search for similar polynomial certificates of non-realizability without any assumed structure. We will see that, perhaps surprisingly, this elementary strategy yields results that are competitive with more elaborate alternatives and allows us to prove non-realizability of several interesting polytopes. |
February 28 | Manish Krishan-Lal and Gabriel Jarry-Bolduc, UBC-Okanagan | Feasibility-based structured nonconvex optimization (Krishan-Lal) / Positive bases with maximal cosine measure (Jarry-Bolduc) | |||
Feasibility-based structured nonconvex optimization: We present the onset of a theory for finding closed-form projections onto conics and quadrics in Hilbert spaces. These sets are crucial in many applications. We will focus on a feasibility-based framework that utilizes these sets to train a neural network with projection methods.
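To illustrate the projection-method framework mentioned here (not the conic or quadric projections derived in the talk), a hedged sketch of alternating projections for a two-set feasibility problem, using a Euclidean ball and a halfspace because both have simple closed-form projections; the sets and the starting point are assumptions.

```python
# Alternating projections for the feasibility problem x in B ∩ H, where
# B is a Euclidean ball and H a halfspace (both with closed-form projections).
# Sets and starting point are illustrative assumptions.
import numpy as np

def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):          # H = {x : a.x <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

center, radius = np.array([0.0, 0.0]), 1.0
a, b = np.array([1.0, 1.0]), 0.5

x = np.array([3.0, -2.0])
for _ in range(50):
    x = project_halfspace(project_ball(x, center, radius), a, b)

print("feasible point:", x)
print("in ball:", np.linalg.norm(x - center) <= radius + 1e-9,
      "in halfspace:", a @ x <= b + 1e-9)
```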
February 14 | Maksym Neyra-Nesterenko, SFU | Stable, accurate and efficient deep neural networks for reconstruction of gradient-sparse images | |||
Deep learning has recently shown substantial potential to outperform standard methods in compressive imaging, an inverse problem where one reconstructs an image from highly undersampled measurements. Given compressive imaging arises in many important applications, such as medical imaging, the potential impact of deep learning is significant. Empirical results indicate deep learning achieves superior accuracy on data for such tasks. However, a theoretical treatment ensuring the stability of deep learning for compressive imaging is mostly absent in the current literature. In fact, many existing deep neural networks designed for these tasks are unstable and fail to generalize. In this talk, we present a novel construction of accurate, stable and efficient neural networks to reconstruct images under a gradient sparsity model. This is based on recent work by Adcock et al. (2021) and Antun et al. (2021). We construct the network by unrolling an optimization algorithm similar to NESTA. We utilize a compressed sensing analysis to prove accuracy and stability of the network. This shows that deep neural networks are capable of performing as well as state-of-the-art compressive imaging techniques for gradient-sparse images. A restart scheme enables exponential decay of the required network depth, yielding a shallower network. In turn this reduces computational costs, making the network feasible for fast image reconstruction. This is joint work with Ben Adcock. |
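The network in this talk unrolls a NESTA-like method with a compressed-sensing analysis; the construction itself is in the cited papers. As a generic, hedged illustration of what "unrolling an optimization algorithm" means, here is a sketch that unrolls plain ISTA for a synthetic sparse-recovery problem; the algorithm, data, and parameters below are assumptions, not the construction from the talk.

```python
# Unrolling: run a fixed number K of ISTA iterations, so the mapping
# y -> x_K can be read as a K-layer network with handcrafted weights.
# ISTA (not NESTA), the synthetic data, and all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unrolled_ista(y, K=200, lam=0.01):
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
    x = np.zeros(n)
    for _ in range(K):                           # each iteration = one "layer"
        x = soft(x - step * A.T @ (A @ x - y), lam * step)
    return x

x_hat = unrolled_ista(y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```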
February 1 (2:30) | Annie Raymond, University of Massachusetts | Tropicalization of graph profiles | |||
The number of homomorphisms from a graph H to a graph G, denoted by hom(H; G), is the number of maps from V(H) to V(G) that yield a graph homomorphism, i.e., that map every edge of H to an edge of G. Given a fixed collection of finite simple graphs {H_1, ..., H_s}, the graph profile is the set of all vectors (hom(H_1; G), ..., hom(H_s; G)) as G varies over all graphs. Graph profiles essentially allow us to understand all polynomial inequalities in homomorphism numbers that are valid on all graphs. Profiles can be extremely complicated; for instance the full profile of any triple of connected graphs is not known. To simplify these objects, we introduce their tropicalization which we show is a closed convex cone that still captures interesting combinatorial information. We explicitly compute these tropicalizations for some sets of graphs, and relate the results to some questions in extremal graph theory. This is joint work with Greg Blekherman, Mohit Singh and Rekha Thomas. |
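For readers unfamiliar with homomorphism numbers, a hedged brute-force sketch of hom(H; G) exactly as defined in the abstract (exponential in |V(H)|, for illustration only; the example graphs are assumptions).

```python
# Brute-force homomorphism count hom(H; G): maps V(H) -> V(G) sending
# every edge of H to an edge of G. Exponential in |V(H)|; toy graphs assumed.
from itertools import product

def hom(H_vertices, H_edges, G_vertices, G_edges):
    G_adj = {(u, v) for u, v in G_edges} | {(v, u) for u, v in G_edges}
    count = 0
    for phi in product(G_vertices, repeat=len(H_vertices)):
        assign = dict(zip(H_vertices, phi))
        if all((assign[u], assign[v]) in G_adj for u, v in H_edges):
            count += 1
    return count

# Sanity check: hom(K2; G) = 2 * |E(G)|, here on a 4-cycle
K2 = ([0, 1], [(0, 1)])
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
print(hom(*K2, *C4))   # expected 8
```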
January 24 | Amir Ardestani-Jaafari, UBC-Okanagan | Improving Stroke Routing Protocol | |||
Currently, stroke patients are transported to the nearest stroke center. Many factors, including the spatial variation in population density, the stroke's severity, the time since stroke onset, and the congestion level at the receiving stroke center, are not considered in the current protocol. We develop an analytical framework that enriches the stroke transport decision-making process by incorporating these factors. Applying our framework to the city of Montreal Stroke Network, we show that adopting a triage strategy leads to significantly improved health outcomes. | |||||
December 6 | Courtney Paquette, McGill University | SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality | |||
In this talk, I will present a framework, inspired by random matrix theory, for analyzing the dynamics of stochastic gradient descent (SGD) when both the number of samples and dimensions are large. Using this new framework, we show that the dynamics of SGD on a least squares problem with random data becomes deterministic in the large sample and dimensional limit. Furthermore, the limiting dynamics are governed by a Volterra integral equation. This model predicts that SGD undergoes a phase transition at an explicitly given critical stepsize that ultimately affects its convergence rate, which we also verify experimentally. Finally, when input data is isotropic, we provide explicit expressions for the dynamics and average-case convergence rates. These rates show significant improvement over the worst-case complexities. | |||||
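A hedged numerical sketch of the setting described above (SGD on a random least-squares problem), showing how the choice of stepsize affects the final loss; the dimensions, data, and stepsizes below are assumptions, and the Volterra-equation analysis itself is in the talk and papers.

```python
# SGD on a random least-squares problem 0.5*||Ax - b||^2 / n, sampling one
# row per step. Problem sizes, data, and stepsizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 500
A = rng.standard_normal((n, d)) / np.sqrt(d)
x_star = rng.standard_normal(d)
b = A @ x_star

def sgd(stepsize, iters=20000):
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)
        a = A[i]
        # gradient of the single sampled term 0.5*(a.x - b_i)^2
        x -= stepsize * (a @ x - b[i]) * a
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 / n

for eta in (0.1, 0.5, 1.0):
    print(f"stepsize {eta}: final loss {sgd(eta):.3e}")
```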
November 22 | Warren Hare, UBC-Okanagan | Image Reconstruction: Superiorization versus Regularization | |||
Image reconstruction plays an important role in a number of modern activities, such as experimental verification of radiotherapy. Based on joint work with M. Guenter, S. Collins, A. Ogilvy and A. Jirasek. |
November 8 | Sami Davies, IDEAL, Northwestern University |
Scheduling with Communication Delays |
We study scheduling with precedence constraints and communication delays. Here, if two dependent jobs are scheduled on different machines, then c time units must pass between their executions. Previously, the best known approximation ratio was O(c), though an open problem in the top-10 list by Schuurman and Woeginger asks whether there exists a constant-factor approximation algorithm. We give a polynomial-time O(log c * log m)-approximation algorithm when given m identical machines and delay c for minimizing makespan. Our approach uses a Sherali-Adams lift of an LP relaxation and a clustering of the semimetric space induced by the lift. We also (1) obtain similar polylogarithmic approximations on related machines with the objective of minimizing the weighted sum of completion times and (2) find that if the delay can vary between pairs of dependent jobs, then a constant-factor approximation algorithm is not possible (assuming NP-complete problems do not admit quasi-polynomial time algorithms). | |||||
October 25 | John Blake, Dalhousie University and Canadian Blood Services | Modelling Rare Blood in Canada | |||
BACKGROUND: Many countries maintain rare blood programs to provide access to blood for patients with complex serology. These include a process to screen donors and a registry to record information about rare donors. Blood agencies may also freeze rare blood. However, frozen blood storage is 2-10 times more expensive than liquid blood. STUDY DESIGN AND METHODS: A two-phase approach to analysis was used to evaluate how rare a blood type must be before a frozen inventory is necessary and what screening rates are required to support a rare blood program. A simulation model was employed to evaluate the impact of inventory on patient access. RESULTS: Results suggested that, for 24 of 29 named phenotypes managed by Canadian Blood Services, insufficient donors had been identified to ensure a stable inventory. Analytic results showed the screening rate necessary to ensure a stable inventory and the timeframe to build a rare donor base. Twenty-nine simulation scenarios were executed to evaluate patient access to rare blood against inventory levels. Simulation results show that some amount of frozen inventory is necessary for phenotypes rarer than 1 in 3,000. However, holding more than 2 units apiece of (O-, O+, A-, A+) did not improve patient access. CONCLUSION: While some level of frozen blood is needed for rare blood, large inventories do not improve access. Modest amounts of frozen inventory, combined with increased donor screening, provide the greatest chance of maximizing patient access. |
2020-2021 Talks
May 13 | WCOM |
The West Coast Optimization Meeting |
April 15 | Gabriel Jarry-Bolduc, UBC-Okanagan |
Gradient and Hessian Approximation Techniques in Derivative-Free Optimization |
In this presentation, derivative-free techniques to approximate the gradient and the Hessian are discussed. Error bounds for the determined and overdetermined cases are introduced. A novel Hessian approximation technique called the nested-set Hessian is provided. We discuss how to build a minimal poised set for nested-set Hessian computation and the relation between the nested-set Hessian and the Hessian of the quadratic interpolation. | ||
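As a hedged illustration of the kind of derivative-free gradient approximation discussed (not the nested-set Hessian itself), here is a simplex gradient built from function values on a coordinate poised set; the test function, base point, and sampling radii are assumptions.

```python
# Simplex gradient: solve S^T g = delta, where the columns of S are the
# sample directions and delta holds the corresponding function differences.
# Test function, base point, and sampling radii are illustrative assumptions.
import numpy as np

def simplex_gradient(f, x0, S):
    """S: (n, n) matrix whose columns are linearly independent directions."""
    delta = np.array([f(x0 + S[:, i]) - f(x0) for i in range(S.shape[1])])
    return np.linalg.solve(S.T, delta)

f = lambda x: np.sin(x[0]) + x[0] * x[1] ** 2            # assumed test function
x0 = np.array([0.5, -1.0])
true_grad = np.array([np.cos(x0[0]) + x0[1] ** 2, 2 * x0[0] * x0[1]])

for h in (1e-1, 1e-3, 1e-5):
    g = simplex_gradient(f, x0, h * np.eye(2))           # coordinate poised set
    print(f"h={h:.0e}  error={np.linalg.norm(g - true_grad):.2e}")
```

Shrinking the sampling radius h drives the error down roughly linearly in h, in line with the error bounds mentioned in the abstract.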
April 1 |
Rommel Regis, St. Joseph's University |
Radial Basis Function Interpolation and its Applications to Expensive Black-Box Optimization |
This talk will introduce radial basis function (RBF) interpolation, which is a method for interpolating scattered data in many dimensions. This method is typically used for approximating a multivariate function whose values are only known at a finite number of points. We will begin with some one-dimensional examples and then move on to higher dimensional cases. RBF interpolation is used in many applications such as in the numerical solution of partial differential equations, medical imaging, machine learning, statistics and optimization. In particular, I will give examples of algorithms that use RBFs to optimize expensive black-box functions. Such expensive functions are ubiquitous in practical applications including mechanical and aerospace design, hyperparameter tuning in deep neural networks, rover trajectory planning, and calibration of simulation models to real data. I will also provide some numerical results that demonstrate the effectiveness of RBFs in optimizing expensive black-box functions. | ||
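A hedged one-dimensional sketch of RBF interpolation as described in the first part of the abstract; the Gaussian kernel, shape parameter, and target function below are assumptions.

```python
# 1-D radial basis function interpolation with a Gaussian kernel:
# fit weights w by solving Phi w = f, then evaluate s(x) = sum_j w_j phi(|x - x_j|).
# Kernel, shape parameter, and test function are illustrative assumptions.
import numpy as np

def phi(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)

f = lambda x: np.sin(3 * x) + 0.5 * x
centers = np.linspace(0.0, 2.0, 9)                  # data sites
Phi = phi(np.abs(centers[:, None] - centers[None, :]))
w = np.linalg.solve(Phi, f(centers))

def interpolant(x):
    return phi(np.abs(x[:, None] - centers[None, :])) @ w

x_test = np.linspace(0.0, 2.0, 201)
print("max interpolation error:", np.max(np.abs(interpolant(x_test) - f(x_test))))
```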
4 March |
Alex Iananntuono, UBC-Okanagan | No-Collision Transportation Maps: Approximating the Optimal Transportation Map in O(n log n) Time |
Transportation maps between probability measures are critical objects in numerous areas of mathematics and applications such as PDE, fluid mechanics, geometry, machine learning, computer science, and economics. Given a pair of source and target measures, one searches for a map that has suitable properties and transports the source measure to the target one. Here, we study maps that possess the no-collision property; that is, particles simultaneously traveling from sources to targets in a unit time with uniform velocities do not collide. These maps are particularly relevant for applications in swarm control problems. We characterize these no-collision maps in terms of half-space preserving property and establish a direct connection between these maps and binary-space-partitioning (BSP) tree structures. Based on this characterization, we provide explicit BSP algorithms, of cost O(n log n), to construct no-collision maps. Moreover, interpreting these maps as approximations of optimal transportation maps, we find that they succeed in computing nearly optimal maps for q-Wasserstein metric (q = 1, 2). In some cases, our maps yield costs that are just a few percent off from being optimal. | ||
11 February |
Nicola Mulberry, Mathematics, SFU |
Modelling COVID-19 in British Columbia |
Over the past year, I have been part of a team of researchers working on modelling COVID-19 directly with the BCCDC. In this seminar, I will discuss three of the many projects this team has worked on: (1) a simple model with explicit social distancing that has been used to generate forecasts both provincially and federally; (2) a project in which we tried to use Google mobility data to inform our weekly forecasts; and (3) an age-structured model that may help inform vaccine allocation strategies. All of the work I will discuss was a collaborative effort among researchers from SFU, UBC, the Department of Fisheries and Oceans, the Ministry of Health, and the BCCDC. | ||
10 December | Jagdeep Brar, UBC-Okanagan | |
26 November | Joe Viana, BI Norwegian Business School | Reflections on healthcare applications of hybrid simulation modelling in the UK and Norway |
The combination or mixing of methods in Operational Research (OR) is not new. The argument for combining OR methods is that each has different strengths and weaknesses, and mixing methods offers the potential to overcome some of the drawbacks of using a single approach. Hybrid simulation (HS), defined as a modelling approach that combines two or more of the following methods: discrete-event simulation (DES), system dynamics (SD), and agent-based simulation (ABS), follows the same rationale for combining or mixing OR methods. It has been argued that HS is better suited to capture the complexity of systems that need to be better understood or improved. The use of HS and the number of HS publications have increased over the last 20 years. The main HS application areas are healthcare, supply chain management and manufacturing, and most published models combine discrete-event simulation and system dynamics. I will introduce and reflect on three health-related HS models: 1) a DES-SD HS model of Chlamydia transmission and treatment in a UK county to assess the effectiveness of screening programs, 2) a DES-SD-ABM HS model of age-related macular degeneration, to understand and illustrate the implications for and interactions between the social care and health care systems in a UK county, and 3) a DES-ABM HS model to improve the operation of a post-term pregnancy outpatient clinic in a Norwegian hospital. | ||
12 November | Patricia Hersh, Department of Mathematics, University of Oregon | Posets arising as 1-skeleta of simple polytopes, the nonrevisiting path conjecture, and poset topology |
Joint seminar with the Discrete Math Seminar. Given a polytope P and a nonzero vector c, the question of which point in P has largest inner product (or smallest inner product) with c is the main goal of linear programming in operations research. Key to efficiency questions regarding linear programming is the directed graph G(P,c) on the 1-skeleton of P, obtained by directing each edge e(u,v) from u to v when c(u) < c(v), and in particular the diameter of G(P,c). We will explore the question of finding sufficient conditions on P and c to guarantee that no directed path ever revisits any polytope face that it has left; this is enough to ensure that linear programming is efficient under all possible choices of pivot rule. It turns out that poset-theoretic techniques and poset topology can help shed light on this question. In discussing work on this question, we will provide background along the way. |
29 October | Fermin Mallor & Daniel Garcia de Vicuña, Public University of Navarre, Spain | Operations Research and Simulation Models Help Health Managers Planning (Scarce) Health Resources |
In this talk, we present a discrete event simulation model to support the decision-making concerned with the short-term planning of the necessary hospital resources, especially Intensive Care Unit (ICU) beds, to face outbreaks such as SARS-CoV-2. The two main components of the simulation model are the stochastic modeling of the admission of new patients and the patient flow through the hospital facilities. For the patient arrival process, we analyze different models based on population growth curves of the most affected countries during the first wave and propose the use of the Gompertz curve. The length of stay is divided into several stages, each one modeled separately. Being used as a short-term forecasting tool, the simulation model requires an accurate representation of the current system state and high fidelity in mimicking the system dynamics from that state. We also report the use of this simulation model during the COVID-19 outbreak in the Autonomous Community of Navarre, in Spain. Every day during the first pandemic wave, the research team informed the regional logistics team in charge of planning the health resources, who programmed the ward and ICU beds based on the resulting predictions. In this second wave, health managers are directly using the simulation model under the supervision of our research group. |
22 October | Amzi Jeffs, Department of Mathematics, University of Washington | Applying Sunflowers of Convex Sets to the Study of Neural Codes |
Motivated by neuroscience phenomena, the study of convex neural codes seeks to characterize how convex sets may intersect and overlap one another in Euclidean space. We will see how certain arrangements of convex sets, called "sunflowers" and "flexible sunflowers", can help answer this question. In particular, we will describe a new Helly-type theorem that constrains these arrangements, and highlight several surprising applications to the theory of convex neural codes. We will briefly contextualize these results by examining minors of codes, a framework that is somewhat analogous to minors of graphs. | ||
15 October | Hui Ouyang, UBC-Okanagan | On Angles Between Convex Sets in Hilbert Spaces |
This talk is based on recent work on angles between convex sets by Bauschke, Wang and Ouyang. The notion of the angle between two linear subspaces has a long history, dating back to Friedrichs’s work in 1937 and Dixmier’s work on the minimal angle in 1949. In 2006, Deutsch and Hundal studied extensions to convex sets in order to analyze convergence rates for the cyclic projections algorithm. In this talk, we present the existence of, and necessary conditions for, a pair of principal vectors. We also specify equivalent expressions for the cosine of minimal angles and characterize when the minimal angle of two convex cones is positive. In addition, we provide sufficient conditions for the closedness of the sum of two convex cones. Finally, we show generalizations of results by Deutsch and Hundal on angles and minimal angles from linear subspaces to general cones. Various examples are used to illustrate the sharpness of our results. |
17 September | Warren Hare, UBC-Okanagan | Variational Analysis and Derivative-Free Optimization |
Variational Analysis studies mathematical objects under small variations. With regards to optimization, these objects are typified by representations of first-order or second-order information (gradients, subgradients, Hessians, etc). On the other hand, Derivative-Free Optimization studies algorithms for continuous optimization. |
2019-2020 Talks
Date | Speaker | Title & Abstract |
26 March |
Warren Hare, UBC-Okanagan |
Title & Abstract: TBA. Postponed due to the COVID-19 pandemic. |
12 March |
Rommel Regis, St Joseph's University | Title & Abstract: TBA. Postponed due to the COVID-19 pandemic. |
27 February | Amir Ardestani-Jaafari, UBC-Okanagan | Linearized Robust Counterparts of Two-stage Robust Optimization Problem In many decision-making problems, some decisions must be made here-and-now before the realization of actual data. However, other decisions are of a wait-and-see type, i.e., they can wait until uncertain data is revealed. It is a computationally challenging task to make wait-and-see decisions fully adaptable while the problem remains computationally tractable. We propose a new tractable method that can be applied to a class of two-stage optimization problems under uncertainty. The method mainly relies on a linearization scheme employed in bilinear optimization problems, therefore it gives rise to the “linearized robust counterpart” models. We identify a close relation between this linearized robust counterpart model and the popular affinely adjustable robust counterpart model. We also describe methods of modifying both types of models to make these approximations less conservative. These methods are heavily inspired by the use of valid linear and conic inequality in the linearization process for bilinear models. We finally demonstrate the potential of our method in operations management applications.
13 February Special room: ASB 10940 |
Yves Lucet, UBC-Okanagan | CCA 2D: a computational convex analysis library for bivariate functions Critical to any optimization problem, duality can be expressed using Fenchel's duality Theorem, which is formulated using the convex conjugate. Computing such a conjugate and other fundamental convex transforms is the goal of computational convex analysis (CCA). In this talk, we will look at the upcoming public release of CCA 2D, a numerical library for computing with bivariate piecewise linear-quadratic functions. We will present optimal data structures for convexity checking, conjugacy, and addition. How best to convert between the different function representations will also be explained. The talk will finish with remaining challenges and open problems. |
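A hedged numerical sketch of the convex conjugate at the heart of CCA, computed by brute force on a univariate grid; the test function and grids are assumptions, and the library itself uses much faster structured algorithms and piecewise linear-quadratic representations.

```python
# Discrete Legendre-Fenchel conjugate f*(s) = max_x [ s*x - f(x) ], computed
# by brute force on grids. Function and grids are illustrative assumptions;
# CCA libraries use far faster structured algorithms.
import numpy as np

f = lambda x: 0.5 * x ** 2                 # self-conjugate test case
xs = np.linspace(-5.0, 5.0, 2001)
ss = np.linspace(-3.0, 3.0, 7)

conj = np.array([np.max(s * xs - f(xs)) for s in ss])
print("slopes  :", ss)
print("f*(s)   :", np.round(conj, 4))
print("0.5*s^2 :", np.round(0.5 * ss ** 2, 4))   # exact conjugate of 0.5*x^2
```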
30 January | Paul Tupper, Dept of Math SFU | Game Theoretic Models of Clear Versus Plain Speech Clear speech is a vocal style used when a speaker wishes to improve comprehension, usually due to the presence of external noise, less-than-optimal listening conditions, or when they are simply instructed to speak clearly. Clear speech has many distinguishing features, including increased duration, pitch, and amplitude, as well as the exaggeration of articulatory movement. We use game theory to model the phenomenon of clear speech, and make predictions of how it changes under different circumstances. We view the behaviours of speakers and hearers when communicating as optimal strategies in communication games. When comprehension becomes more difficult, the |
23 January | Jagdeep Brar, UBC-Okanagan | Optimization of expected returns per unit risk Portfolio optimization is the process of choosing the best portfolio (optimal investment decision across a set of financial instruments or assets) out of the possible set of portfolios. The optimal portfolio should maximize the expected return and/or minimize the financial risk. Investors want to maximize their returns, but higher expected |
21 November | Yankai Cao, Chemical & Biological Engineering, UBC |
Large-Scale Optimization: Algorithms and Applications This talk presents my recent work on algorithms and software implementations to solve nonlinear stochastic programming problems to local and global optimality. Our algorithms exploit emerging high-performance computing hardware (e.g., multi-core CPUs, GPUs, and computing clusters) to achieve computational scalability. We are currently using our capabilities to address engineering and scientific questions that arise in diverse application domains including control of wind turbines, power management in large networks, and parameter inference in microbial community models. The problems that we are addressing are of unprecedented complexity and defy the state-of-the-art. For example, the problem of designing a control system for wind turbines is a nonlinear programming problem (NLP) with 7.5 million variables that takes days to solve with existing solvers. |
7 November | Lloyd Elliot Department of Statistics & Actuarial Science, SFU |
Random Tessellation Forests Space partitioning methods such as random forests and the Mondrian process are powerful machine learning methods for multi-dimensional and relational data, and are based on recursively cutting a domain. The flexibility of these methods is often limited by the requirement that the cuts be axis aligned. The Ostomachion process and the self-consistent binary space partitioning-tree process were recently introduced as generalizations of the Mondrian process for space partitioning with non-axis aligned cuts in the two dimensional plane. Motivated by the need for a multi-dimensional partitioning tree with non-axis aligned cuts, we propose the Random Tessellation Process (RTP), a framework that includes the Mondrian process and the binary space partitioning-tree process as special cases. We derive a sequential Monte Carlo algorithm for inference, and provide random forest methods. Our process is self-consistent and can relax axis-aligned constraints, allowing complex inter-dimensional dependence to be captured. We present a simulation study, and analyse gene expression data of brain tissue, showing improved accuracies over other methods. |
31 October | Chen Feng School of Engineering, UBC Okanagan |
Mathematical Foundation of Blockchain Technology Blockchain technology has recently received much attention. In addition to serving as a digital platform for Bitcoin and other cryptocurrencies, it has the potential to transform various industries including finance, healthcare, logistics, and public administration. Despite its great potential, blockchain technology is still in an early stage and is sometimes misunderstood by the general public. In this talk, we will explain some mathematical foundation of this emerging technology, with a particular focus on the mathematical modelling and analysis of Bitcoin. This talk is self-contained and no background in Bitcoin and blockchains will be assumed. |
17 October | Steven Shechter Sauder School of Business, UBC |
Bayes-Adaptive Treatment Plans: The Case of Chronic Kidney Disease We consider the problem of designing ongoing treatment plans for a population with heterogeneity in disease progression and response to medical interventions. We create a model that partially learns the patient type by monitoring the patient health over time and updates a patient's treatment plan accordingly. We formulate the problem as a multivariate state-space, partially observable Markov decision process (POMDP) and provide structural properties of the value function, as well as the optimal policy. We extend this modeling framework to a general class of treatment initiation problems where there is a stochastic lead-time before a treatment becomes available or effective. As a case study, we develop a data-driven, decision-analytic model to study the optimal timing of vascular access surgery for patients with progressive chronic kidney disease, and we establish policies that consider a patient's rate of disease progression in addition to the kidney health state. To circumvent the curse of dimensionality of the POMDP, we develop several approximate policies and evaluate them against a lower bound. Through a numerical study and several sensitivity analyses, we establish that our model-based recommendations perform well, and we provide further policy insights that sharpen existing guidelines. |
3 October | Nick Dexter Math, SFU |
On the gap between theory and practice in deep learning Deep learning (DL) is transforming whole industries as complicated decision-making processes are being automated by neural networks trained on real-world data. Yet as these tools are increasingly being applied to critical problems in medicine, science, and engineering, many important questions about their stability, reliability, and approximation capabilities remain. Such questions include: how many data points are sufficient to train a neural network on simple approximation tasks and how robust are these trained architectures to unknown sources of noise? Numerous studies have demonstrated that neural network models trained on image classification tasks are vulnerable to misclassification when provided images corrupted by so-called adversarial perturbations of the data, a special form of additive noise designed by exploiting the trained model's own weights. These observations point to an inherent instability, either in the training process or the architectures themselves. In this work we seek to quantify the approximation capabilities of deep neural networks (DNNs), the core architectures driving much of the recent success of DL. Recently published theoretical results on approximation with DNNs show that these architectures allow for the same convergence rates as best-in-class schemes, including h,p-adaptive finite element approximations, in norms relevant to PDE problems. Indeed, our own analysis confirms that DNNs afford the same sample complexity estimates as compressed sensing (CS) on sparse polynomial approximation problems. To explore the approximation capabilities of DNNs, we present numerical experiments on a series of simple tests in high-dimensional function approximation, with comparisons to results achieved with CS on the same problems. In contrast to the theory, our numerical experiments show that current methods of training often yield DNNs which fail to achieve the theoretical error estimates, exposing a critical disconnect between the generalization potential of DNNs predicted by the theory and the real-world performance of trained DNNs. |
2018-2019
Date | Speaker | Title & Abstract |
---|---|---|
August 2nd *Friday* |
Michelle Spencer |
Formulating Quadratic Traveling Salesman Problems for Computation |
August 1st *11:00 a.m.* |
Zsolt Saffer Institute of Statistics and Mathematical Methods in Economics TU Wien |
Probabilistic capacity control of queues Abstract: This talk is about controlling the capacity of a queueing system. The capacity of a queueing system can be modeled in different ways, for example as the number of servers or the service rate. We review several ways of controlling such capacity, such as making the capacity, or the change of capacity, dependent on the system state, e.g. the number of customers in the system. In this talk we concentrate on the second approach, in which the capacity is allowed to be increased or decreased by a fixed value at each customer service completion. We present the analysis of two queueing models with such controllable capacity. In the first model, the number of active servers can be controlled by means of probabilities specifying the dependency of the number of active servers on the actual number of customers and the number of active servers. The service time is constant and the concurrently served customers are served in a synchronized manner. The number of active servers can be incremented, decremented or kept unchanged at the end of each service time according to the given probabilities. The system is a loss system, i.e. it has no buffer for long-term customer waiting. We provide explicit-form results for the joint and marginal distributions of the number of servers and the number of customers on the PGF level, as well as expressions for the most important system measures. The second queueing analysis considers an M/M/1 queue, in which the customer service rate is allowed to be increased or decreased by a fixed value at each customer service completion. These changes in service rate are controlled by probabilities depending on the actual number of customers and the actual service rate. We establish a methodology which utilizes the specific structure of the model. This methodology inherits some elements from the stationary analysis of the standard QBD model and provides a first-order, forward algorithm for computing the stationary probability vectors of the number of customers in the system. We also derive the vector probability generating function and the vector mean of the stationary number of customers. Such queueing models with controllable capacity can be applied, e.g., in intelligent transportation systems or manufacturing systems, in which the processing speed follows the processing demand. |
Tuesday, June 4th Joint O.R. and Discrete Math Seminar *Burnaby* *1:30* *AQ K9509* |
Ahmad Abdi Tepper School of Business Carnegie Mellon University and Department of Mathematics London School of Economics |
Ideal clutters and k-wise intersecting families Abstract: A clutter is *ideal* if the corresponding set covering polyhedron has no fractional vertices, and it is *k-wise intersecting* if the members don’t have a common element but every k members do. We conjecture that there is a constant k such that every k-wise intersecting clutter is non-ideal. I will show how this conjecture for k=4 would be an extension of Jaeger’s 8-flow theorem, and how a variation of the conjecture for k=3 would be an extension of Tutte’s 4-flow conjecture. I will also discuss connections to tangential 2-blocks, binary projective geometries, the sums of circuits property, etc. |
April 25th |
Jean-Philippe Richard Industrial and Systems Engineering University of Minnesota |
On the Convexification of Permutation-invariant Sets arising in MINLP and Data Problems Abstract: Permutation-invariant sets - sets that do not change when the variables are permuted - appear in a variety of optimization problems, including sparse principal component analysis. Solving permutation-invariant optimization problems to global optimality requires the derivation of strong convex relaxations for their feasible region. In this talk, we study how to construct the convex hull of permutation-invariant sets in a higher dimensional space. When projection is possible, we also obtain convex hull descriptions in the original space of variables. We illustrate our techniques by developing (1) a novel reformulation and relaxation for sparse principal component analysis, (2) convex hull of matrices with bounded rank and spectral norms, (3) convex envelopes of multi-linear functions over certain domains, and (4) linear descriptions of logical and sparsity constraints arising in the formulation of 0-1 mixed integer programs. This is joint work with Mohit Tawarmalani (Purdue) and Jinhak Kim (University of South Alabama). |
April 5th *Friday, 1:30* *RCB 6125*(Burnaby) |
Liam Erdos, Ben Gregson, Shane Jace and Martin Zhu Simon Fraser University |
Math 402W Operations Research Clinic project presentation Coordinating Primary Care Operating Hours to Reduce Acute Care Visits |
February 28th *Thursday* *9:30-12:00* *SUR 3040* |
Xiaorui Li Ph.D. thesis defence Senior Supervisor: Z. Lu |
Sparse and Low Rank Approximation via Partial Regularization: Models, Theory and Algorithms Abstract: Sparse representation and low-rank approximation are fundamental tools in fields of signal processing and pattern analysis. In this thesis, we consider introducing some partial regularizers to these problems in order to neutralize the bias incurred by some large entries (in magnitude) of the associated vector or some large singular values of the associated matrix. In particular, we first consider a class of constrained optimization problems whose constraints involve a cardinality or rank constraint. Under some suitable assumptions, we show that the penalty formulation based on a partial regularization is an exact reformulation of the original problem in the sense that they both share the same global minimizers. We also show that a local minimizer of the original problem is that of the penalty reformulation. Specifically, we propose a class of models with partial regularization for recovering a sparse solution of a linear system. We then study some theoretical properties of these models including existence of optimal solutions, sparsity-inducing properties, local or global recovery, and stable recovery. In addition, numerical algorithms are proposed for solving those models, in which each subproblem is solved by a nonmonotone proximal gradient (NPG) method. Despite the complication of the partial regularizers, we show that each proximal subproblem in NPG can be solved as a certain number of one-dimensional optimization problems, which usually have a closed-form solution. The global convergence of these methods is also established. Finally, we compare the performance of our approach with some existing approaches on both randomly generated and real-life instances, and report some promising computational results. |
PIMS - SFU Seminar December 6th *SUR 5380* |
Asia Ivic Weiss York University |
Regular Polyhedra, Polytopes and Beyond Abstract: In this talk we summarize the classification of regular polyhedra and polytopes and extend the concept to that of hypertope: a thin, residually connected incidence geometry. We present the characterization of the automorphism groups of regular hypertopes and overview recent results on classification of toroidal hypertopes. |
October 6th *Saturday, 8:30-4:00* *Kelowna* |
WCOM Hosted by UBC Okanagan |
Details of the Fall 2018 West Coast Optimization Meeting available from the hosts. |
PIMS - SFU Seminar October 4th |
Jong-Shi Pang Daniel J. Epstein Department of Industrial and Systems Engineering University of Southern California |
Non-problems in Optimization for Statistics Abstract: The phrase "non-problems" in the title refers to a collection of adjectives that start with the prefix "non". These include "non-convex", "non-differentiable", "non-traditional", "non-trivial", and "non-stochastic gradient" (as a counter to a topical research theme), all in the context of optimization for statistics. Outside a stand-alone optimization problem, the phrase could include "non-cooperative" game theory as this is an area where there are significant research opportunities when combined with the other "non-subjects". I will present a variety of these non-problems and give a brief summary of my research on them. |
2017-2018
July 12th *Thursday* *2:00-4:30* *SUR 2980* |
Pooja Pandey Ph.D. thesis defence Senior Supervisor: A. Punnen |
Topics in Quadratic Binary Optimization Problems Abstract: In this dissertation, we consider the quadratic combinatorial optimization problem (QCOP) and its variations: the generalized vertex cover problem (GVC), the quadratic unconstrained binary optimization problem (QUBO), and the quadratic set covering problem (QSCP). We study these problems as discussed below. For QCOP, we analyze equivalent representations of the pair (Q, c), where Q is a quadratic cost matrix and c is a linear cost vector. We present various procedures to obtain equivalent representations, and study the effect of equivalent representations on the computation time to obtain an optimal solution, on the quality of the lower bound values obtained by various lower bounding schemes, and on the quality of the heuristic solution. The first variation of QCOP is GVC, and we show that GVC is equivalent to QUBO and also equivalent to some other variations of GVC. Some solvable cases are identified and approximation algorithms are suggested for special cases. We also study GVC on bipartite graphs. QUBO is the second variation of QCOP. For QUBO, we analyze several heuristic algorithms using domination analysis. We show that, for QUBO, any algorithm that guarantees a solution no worse than the average has a domination ratio of at least 1/40. We extend this result to the maximum and minimum cut problems; maximum and minimum uncut problems; and GVC. We also study the QUBO when Q is: 1) a (2d+1)-diagonal matrix, 2) a (2d+1)-reverse-diagonal matrix, and 3) a (2d+1)-cross diagonal matrix, and show that these cases are solvable in polynomial time when d is fixed or is of O(log n). The last variation of QCOP is QSCP. For QSCP, we identify various inaccuracies in the paper by R. R. Saxena and S. R. Arora, A linearization technique for solving the Quadratic Set Covering Problem, Optimization, 39 (1997) 33-42. In particular, we observe that their algorithm does not guarantee optimality, contrary to what is claimed. We also present the mixed integer linear programming formulations (MILP) for QSCP. We compare the lower bounds obtained by the linear programming relaxations of MILPs, and study the effect of equivalent representations of QSCP on these MILPs. |
July 12th *Thursday* *10:00-12:30* *SUR 2710* |
Brad Woods Ph.D. thesis defence Senior Supervisor: A. Punnen |
The Quadratic Travelling Salesman Problem: Complexity and Approximation Abstract: In this thesis we study the Quadratic Travelling Salesman Problem (QTSP) which generalizes the well-studied Traveling Salesman Problem and several of its variations. QTSP is to find a least cost Hamiltonian cycle in an edge-weighted graph, where costs are defined for all pairs of edges contained in the Hamilton cycle. This is a more general version than the one that appears in the literature as the QTSP, denoted here as the adjacent quadratic TSP, which only considers costs for pairs of adjacent edges. The fixed-rank QTSP is introduced as a restricted version of the QTSP where the cost matrix has fixed rank p. When c = 0, it is referred to as the homogeneous rank p QTSP. We study QTSP by examining the complexity of searching exponential neighbourhoods for QTSP, the fixed-rank QTSP and the adjacent quadratic TSP. |
May 10th *2:30* via COCANA (Kelowna) |
Björn Rüffer University of Newcastle (Australia) |
Lyapunov Functions and Optimization Further details available from the COCANA Website. |
May 5th *Saturday, 8:30-4:00* *Seattle* |
WCOM |
Details of the Spring 2018 West Coast Optimization Meeting held at UW Seattle are here. |
Apr. 20th *Friday* *4:30-5:30* *SUR 2710* |
Jingxin Guo M.Sc. project presentation Senior Supervisor: A. Punnen |
The Unrestricted Linear Fractional Assignment Problem Abstract: The linear fractional assignment problem (LFAP) is a well-studied combinatorial optimization problem with various applications. It attempts to minimize the ratio of two linear functions subject to standard assignment constraints. When the denominator of the objective function is positive, LFAP is solvable in polynomial time. However, when the denominator of the objective function is allowed to take positive and negative values, LFAP is known to be NP-hard. In this thesis, we present two 0-1 programming formulations of LFAP and compare their relative merits using experimental analysis. We also show that LFAP can be solved as a polynomial bounded sequence of constrained assignment problems. Experimental results using this methodology are also given. Finally, some local search heuristics are developed and their efficiency is analyzed using systematic experimental analysis. Our algorithms are able to solve reasonably large size problems within acceptable computational time. |
Apr. 10th *11:30* *SUR 2980* |
Negin Bolkhanian and Matthew Reyers Simon Fraser University |
Math 402W Operations Research Clinic project presentation The Walking School Bus Routing Problem |
Apr. 10th *10:30* *SUR 2980* |
Alan Bi, Trevor Dallow and Samantha Zimmerman Simon Fraser University |
Math 402W Operations Research Clinic project presentation Scheduling of Nurses at Raven Song Community Health Care Centre |
Apr. 5th via COCANA (Kelowna) |
Yves Lucet UBC Okanagan |
On the convexity of piecewise-defined functions Further details available from the COCANA Website. |
Mar. 15th |
Sandy Rutherford Complex Systems Modelling Group Simon Fraser University |
Control of an HIV Epidemic among Injection Drug Users: Simulation Modeling on Complex Networks Abstract: HIV remains a serious public health problem in many marginalized communities. We develop a network model of the HIV epidemic affecting injection drug users and female sex workers in the Downtown Eastside neighborhood of Vancouver, Canada, calibrated using data from public health surveillance and cohort studies. Many HIV positive individuals are unaware of their status and strategies for testing are an important part of HIV response programs. Upon diagnosis, HIV patients enter a continuum of care, involving both engagement and retention in treatment. We explored potential epidemic control strategies through simulation: reduced syringe sharing during injection drug use, reduced time to diagnosis, reduced time to initiation of treatment following diagnosis, and improved retention in treatment. We find that syringe sharing, HIV testing, and retention in treatment significantly impact HIV prevalence. Close connections between syringe sharing and sexual networks deserve attention as important avenues for rapid HIV transmission. |
Mar. 8th via COCANA (Kelowna) |
Tim Hoheisel McGill University |
Epi-convergent Smoothing with Applications to Convex-Composite Functions Further details available from the COCANA Website. |
Mar. 1st SUR 3040 |
Winfried Grassmann Computer Science University of Saskatchewan |
Discrete Event Systems as Basis of Discrete-time Queues, Including Multiserver Queues and Tandem Queues Abstract: A number of papers have appeared describing how to calculate the distribution of the number of elements in a discrete-time GI/G/1 queue in a statistical equilibrium. Essentially, the process is converted to a discrete-time Markov chain and solved as such. It turns out that this method also works for discrete-time discrete event systems in a statistical equilibrium. They, too, can be converted into discrete-time Markov chains, and solved. The problem is that due to the curse of dimensionality, the number of states is huge, and to solve the equilibrium equations, iterative methods should be used, such as Gauss-Seidel. Still, every effort must be made to reduce the dimensionality of the problem as much as possible. One way to do this is to embed the process at the points in time where events occur. Using this method, we could solve systems of reasonable size on our laptop, including GI/G/c queues and tandem queues. For some models, further reductions in the dimensionality are possible. One example is the application of the distributional Little’s law to the GI/G/1 queue. Possible extensions of this method are discussed. |
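A hedged sketch of the Gauss-Seidel idea mentioned above for the equilibrium equations, applied to a tiny discrete-time Markov chain; the transition matrix is an illustrative assumption, and real discrete event systems produce far larger, structured chains.

```python
# Gauss-Seidel sweeps for the stationary equations pi = pi P of a small
# discrete-time Markov chain, with normalization after each sweep.
# The transition matrix is an illustrative assumption.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]
pi = np.full(n, 1.0 / n)

for sweep in range(200):
    for j in range(n):
        # pi_j (1 - P_jj) = sum_{i != j} pi_i P_ij, using the newest values
        pi[j] = sum(pi[i] * P[i, j] for i in range(n) if i != j) / (1.0 - P[j, j])
    pi /= pi.sum()

print("stationary distribution:", np.round(pi, 6))
print("residual ||pi P - pi|| :", np.linalg.norm(pi @ P - pi))
```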
Feb. 22nd *SUR 3040* |
David Stanford Statistics and Actuarial Science Western (Ontario) University |
The Affine Accumulating Priority Queue: a model which prioritises based upon acuity and waiting time Abstract: Previous accumulating priority queues (APQ) in the literature have assumed that all arriving customers accumulate priority credits over time starting from an initial value of 0. The affine APQ model introduces a new element in terms of an initial class-dependent credit level, from which the accumulated priority grows linearly over time. In this presentation, we consider a two-class APQ, for which class-1 customers receive a positive initial credit upon arrival. (Without loss of generality the class-2 customers continue to have an initial credit of 0.) We assess the impact of the initial priority score upon the waiting time distribution for the lower class of customers, and provide insights regarding the waiting time distribution of the higher priority class as well. Numerical examples will be presented to illustrate the trends we observe. This is joint work with Maryam Mojalal, Richard Caron, Peter Taylor and Ilze Ziedins. |
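A hedged sketch of the affine accumulation rule described in the abstract: at time t, a waiting class-k customer who arrived at time a has priority b_k + r_k*(t - a), and the server picks the waiting customer with the largest value. The initial credits, rates, and waiting-room snapshot below are assumptions.

```python
# Affine accumulating priority: priority(t) = initial_credit + rate * (t - arrival).
# Class-1 customers start with a positive credit; class-2 customers start at 0.
# Credits, rates, and the example waiting-room snapshot are assumptions.
credit = {1: 5.0, 2: 0.0}          # class-dependent initial credit
rate = {1: 2.0, 2: 1.0}            # accumulation rate per unit waiting time

def priority(cls, arrival, now):
    return credit[cls] + rate[cls] * (now - arrival)

# Waiting customers: (id, class, arrival time); the server frees up at t = 10.
waiting = [("A", 1, 8.0), ("B", 2, 3.0), ("C", 2, 7.5), ("D", 1, 9.0)]
now = 10.0
for cid, cls, arr in waiting:
    print(cid, "priority", priority(cls, arr, now))

next_served = max(waiting, key=lambda c: priority(c[1], c[2], now))
print("served next:", next_served[0])
```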
Jan. 25th via COCANA (Kelowna) *SUR 2746* |
Paulo J. S. Silva Unicamp |
Using the Spectral Proximal Gradient Method to Decompose a Matrix as the Sum of a Sparse Plus a Low-rank Term Further details available from the COCANA Website. |
Dec. 11th *Monday* *9:00-10:30* *SUR 2710* |
Bolong He M.Sc. project presentation Senior Supervisor: T. Stephen |
Analysis of Firefighter Absences and Hiring Schedule Optimization at the Surrey Fire Department Abstract: We study staffing issues at the Surrey Fire Department with a view to understanding and optimizing the annual hiring cycle for full-time firefighters. This project begins with a discussion of a previous model used by the Fire Department which predicts absences based on seasonally adjusted historical data and then optimizes the hiring cycle based on a simulation. We extend the analysis of the data to include the age cohort as a variable and compare short-term and long-term absences. We then use time series to predict future absences and use these predictions along with additional constraints to optimize the hiring schedule. |
Dec. 4th *Monday* *10:30-1:00* *SUR 3040* |
Timothy Yusun Ph.D. thesis defence Senior Supervisor: T. Stephen |
On the Circuit Diameters of Polyhedra Abstract: We develop a framework to study the circuit diameters of polyhedra. The circuit diameter is a generalization of the combinatorial (edge) diameter, where walks are permitted to enter the interior of the polyhedron as long as steps are parallel to its circuit directions. Because the circuit diameter is dependent on the specific realization of the polyhedron, many of the techniques used in the edge case do not transfer easily. We reformulate circuit analogues of the Hirsch conjecture, the d-step conjecture, and the nonrevisiting conjecture, recovering the edge case equivalences in the circuit case. To do this we adapt the notion of simplicity to work with circuit diameter, and so we define C-simplicity and wedge-simplicity. Then, we prove the circuit 4-step conjecture, including for unbounded polyhedra, by showing that the original counterexample U4 to the combinatorial analogue satisfies the Hirsch bound in the circuit case, independent of its realization. This was the first known counterexample to Hirsch, and several families of counterexamples are constructed from U4. In particular, the unbounded Hirsch conjecture could still hold in the circuit case. We also use computational methods to study Q4, the bounded counterpart to U4, and give two realizations with different circuit diameters. It remains open whether Q4 is circuit Hirsch-sharp; however, we are able to lower the distance bound for at least one direction between the two far vertices of Q4. Finally, we present some auxiliary results involving representations of polyhedra and circuit calculations. |
Oct. 26th via COCANA (Kelowna) |
Hamid Afshari University of Manitoba andUBC Okanagan |
Improving the Resilience of Energy Flow Exchanges in Eco-Industrial Parks: Optimization Under Uncertainty Further details available from the COCANA Website. |
Sept. 28th via COCANA (Kelowna) |
Gord Lovegrove UBC Okanagan |
Models of Bicycle Rider Perceptions Related to Safety and Comfort Further details available from the COCANA Website. |
Sept. 16th *Saturday, 8:30-4:00* *2740* |
WCOM |
Details of the Fall 2017 West Coast Optimization Meeting here. |
Sept. 14th via COCANA (Kelowna) |
Scott Lindstrom University of Newcastle (Australia) |
Douglas-Rachford Method for Non-Convex Feasibility Problems Further details available from the COCANA Website. |
2016-2017
July 6th *3:00-4:30* *SUR 2710* |
Jingbo Tian M.Sc. thesis defense Senior Supervisor: A. Punnen |
The Multiplicative Assignment Problem Abstract: The quadratic assignment problem (QAP) is an extensively studied combinatorial optimization problem. The special case of QAP where the cost matrix is of rank one is called the multiplicative assignment problem (MAP). MAP is not well studied in literature, particularly in terms of experimental analysis of algorithms. In this thesis we present some mixed integer linear programming formulations and compare their relative strength using experimental analysis. We also present exact and heuristic algorithms to solve MAP. Our heuristic algorithms include improvements in existing FPTAs, as well as local search and tabu search enhancements. Results of extensive experimental analyses are also reported. |
May 22nd-25th | No seminar | The 2017 SIAM Conference on Optimization takes place in Vancouver. |
May 11th v/ COCANA (Kelowna) *3:00* |
Frank Maurer Computer Science University of Calgary |
Immersive Analytics Further details available from the COCANA Website. |
Apr. 7th *Friday* *2:30-4:00* *SUR 2980* |
Olga Zasenko M.Sc. thesis defense Senior Supervisor: T. Stephen |
Algorithms for Colourful Simplicial Depth and Median in the Plane Abstract: The colourful simplicial depth (CSD) of a point x in R^2 relative to a configuration P=(P1, P2, ..., Pk) of n points in k colour classes is exactly the number of closed simplices (triangles) with vertices from 3 different colour classes that contain x in their convex hull. We consider the problems of efficiently computing the colourful simplicial depth of a point x, and of finding a point in R^2, called a median, that maximizes colourful simplicial depth. For computing the colourful simplicial depth of x, our algorithm runs in time O(n log(n) + kn) in general, and O(kn) if the points are sorted around x. For finding the colourful median, we get a time of O(n^4). For comparison, the running times of the best known algorithms for the monochrome version of these problems are O(n log(n)) in general, improving to O(n) if the points are sorted around x for monochrome depth, and O(n^4) for finding a monochrome median. |
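A hedged brute-force sketch of the definition in this abstract, checking every colourful triangle for containment of x (so O(n^3) per query rather than the faster algorithms presented in the talk); the point configuration below is an assumption.

```python
# Brute-force colourful simplicial depth in the plane: count closed triangles
# with one vertex from each of three distinct colour classes containing x.
# The example configuration is an illustrative assumption.
from itertools import combinations, product

def sign(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_closed_triangle(x, a, b, c):
    s1, s2, s3 = sign(a, b, x), sign(b, c, x), sign(c, a, x)
    has_neg = (s1 < 0) or (s2 < 0) or (s3 < 0)
    has_pos = (s1 > 0) or (s2 > 0) or (s3 > 0)
    return not (has_neg and has_pos)        # true also on edges and vertices

def colourful_depth(x, colour_classes):
    depth = 0
    for P1, P2, P3 in combinations(colour_classes, 3):
        for a, b, c in product(P1, P2, P3):
            if in_closed_triangle(x, a, b, c):
                depth += 1
    return depth

classes = [[(1, 0), (-1, 1)], [(0, 1), (2, 2)], [(-1, -1), (1, -2)]]
print(colourful_depth((0.0, 0.0), classes))
```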
Apr. 6th *11:30* *SUR 2980* |
R. Jiang, W. Lu, L.T.S. Siu, T. Tong and D. Yin Simon Fraser University |
Math 402W Operations Research Clinic project presentation School Infrastructure Planning and Catchment Rezoning: Alleviating Surrey Enrollment Pressure for 2020-2024 |
Feb. 2nd via COCANA (Kelowna) |
Majid Jaberipour UBC Okanagan |
Derivative-Free methods for Solving Unconstrained and Constrained Optimization Problems Further details available from the COCANA Website. |
Jan. 26th | Songzi Du Department of Economics Simon Fraser University |
Robust Mechanisms Under Common Valuation Abstract: We study robust mechanisms to sell a common-value good. We assume that the mechanism designer knows the prior distribution of the buyers’ common value but is unsure of the buyers’ information structure about the common value. We use linear programming duality to derive mechanisms that guarantee a good expected revenue for all information structures and all equilibria. Our mechanism maximizes the revenue guarantee when there is one buyer. As the number of buyers tends to infinity, the revenue guarantee of our mechanism converges to the full surplus. We numerically demonstrate that the revenue guarantees of our mechanisms are generally close to optimal when there are two buyers. |
Jan. 12th via COCANA (Kelowna) |
Honglin Luo Chongqing Normal University |
SDP relaxation and rank-1 decomposition for maximin 1-dispersion problems Further details available from the COCANA Website. |
Dec. 14th *12:00* *SUR 2710* |
Pooja Pandey Ph.D. thesis proposal presentation Senior Supervisor: A. Punnen |
The quadratic set covering problem and its variations Abstract: The set covering problem is a well studied combinatorial problem with numerous applications. In this thesis we primarily consider set covering problems with a quadratic objective function (QSCP). QSCP has many applications such as the location of access points in a wireless network, the facility layout problem, or line planning in public transport. QSCP is known to be NP-hard and there are not many studies on this interesting problem. Some of the previous studies include: the polynomial approximation properties of the QSCP and approximation hardness results, and a cutting-plane algorithm. In this thesis we plan to study various linearization techniques integrating ideas from classical linearization techniques, McCormick envelopes, binary and decimal expansions of integers, etc. We also plan to perform a theoretical analysis of the strengths of their LP relaxations. For the computational experiments we will use a general purpose mixed integer programming solver (CPLEX) and carry out an experimental comparison of these formulations. Since not many benchmark problems are available for QSCP, we will develop benchmark test problems for the computational tasks. Another direction of investigation concerns efficient heuristic algorithms and systematic experimentation comparing various strategies using our benchmark problems. We will compare our heuristics with existing heuristics for QSCP from the literature. We also plan to develop exact algorithms based on branch-and-cut strategies using valid inequalities and effective branching strategies. The exact algorithms we develop will be compared with general purpose commercial codes. Finally, several variations and special cases will be examined from theoretical as well as experimental points of view. These include the generalized vertex cover problem, the quadratic vertex cover problem, and the bottleneck quadratic set cover problem. |
Dec. 14th *10:30* *SUR 2710* |
Brad Woods Ph.D. thesis proposal presentation Senior Supervisor: A. Punnen |
The quadratic travelling salesman problem: complexity, approximations, and exponential neighbourhoods Abstract: The (linear) travelling salesman problem (TSP) is to find a least cost Hamilton cycle in an edge weighted graph. It is one of the most widely studied hard combinatorial optimization problems, and has been used to model a wide variety of applications. In this thesis, a proper generalization of TSP, referred to as the quadratic travelling salesman problem (QTSP), is considered. Here, costs are incurred for every pair of edges belonging to a tour. A special case has been studied by various authors recently, which only considers pairs of adjacent edges (QTSP(A)). Since QTSP contains TSP as a special case, it is clearly NP-hard. For any NP-hard problem it is natural to investigate the conditions under which one can solve the problem in polynomial time, to examine the approximability, and to identify exponential neighbourhoods which can be searched efficiently. We begin by considering the polynomially solvable special cases for the linear TSP. QTSP is found to be sufficiently complex to remain intractable, and this motivates restrictions on the QTSP objective function. In addition to the general QTSP, we will consider two restrictions, QTSP(A) and the rank p QTSP (QTSP(p)), which restricts the rank of the quadratic cost matrix to some fixed number p. We will discuss the computational complexity of these problems, and develop pseudopolynomial algorithms and fully polynomial time approximation schemes wherever appropriate. This includes specially structured graphs and special classes of tours. The behaviour of special cases of QTSP(A) and QTSP(p) is different, depending on their structural properties. Our work will focus on exploiting these structures in algorithm development. We will also consider the QTSP linearization problem, which extends the work of Kabadi and Punnen on the QAP linearization problem. We will give necessary and sufficient conditions for identifying when an instance of QTSP is linearizable. |
Dec. 8th *SUR 2746* via COCANA (Kelowna) |
Jonathan Eckstein Management Science & Information Systems Rutgers University |
Asynchronous Projective Splitting for Convex Optimization and Monotone Inclusion Problems Further details available from the COCANA Website. |
Nov. 24th |
Steffen Borgwardt University of Colorado, Denver |
Operations Research in Land Exchange Abstract: With geometric modeling techniques, one can represent the feasible solutions of problems in operations research as objects in high-dimensional space. The properties of these objects reveal information about the underlying problems and lead to algorithms. We model an application in land exchange as a clustering problem where the clusters have to adhere to prescribed cluster sizes. In this approach, we connect least-squares assignments, cell complexes, and the studies of polyhedra. Further, we report on how these results were implemented in practice. The devised methods lead to various tools for more general questions on big data. |
Nov. 17th | Zirui Zhou Simon Fraser University |
A Unified Approach to Error Bounds for Structured Convex Optimization Problems Abstract: Error bounds, which refer to inequalities that bound the distance of vectors in a test set to a given set by a residual function, have proven to be extremely useful in analyzing the convergence rates of a host of iterative methods for solving optimization problems. In this paper, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates not only fairly general constrained minimization problems but also various regularized loss minimization formulations in machine learning, signal processing, and statistics. Using our framework, we show that a number of existing error bound results can be recovered in a unified and transparent manner. To further demonstrate the power of our framework, we apply it to a class of nuclear-norm regularized loss minimization problems and establish a new error bound for this class under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that the said error bound could fail to hold without the regularity condition. We believe that our approach will find further applications in the study of error bounds for structured convex optimization problems. |
Nov. 3rd |
Stefan Lendl Institute of Discrete Mathematics Graz University of Technology |
Time-Expanded Combinatorial Optimization Problems Abstract: For some combinatorial optimization problems their temporal dimension has been widely studied. Network flow problems and scheduling problems are major examples, where lots of such results are known. For other types of combinatorial optimization problems there are hardly any or no results at all about their temporal extension. First we introduce different models and variants of time-expanded combinatorial optimization problems. We present complexity and algorithmic results for some problem variants. This is joint work with Bettina Klinz. |
Oct. 27th via COCANA (Kelowna) |
John Chinneck Systems and Computer Engineering Carleton University |
MILP Insights and Algorithms Further details available from the COCANA Website. |
Oct. 13th |
Mahsa Faizrahnemoon Simon Fraser University |
Scheduling of a Production Line at SKF via Integer Programming Abstract: We describe a project to find the required sizes of the buffers in one of the future roller production channels in SKF's factory in Gothenburg. An integer linear programming model for finding the best schedule for the channel is developed. The model minimizes the sum of the lead times for the batches of rollers in the channel. Thereby, the total time that the rollers are kept in the buffers is minimized, which in turn minimizes the average demand for the volumes of the buffers. Since the model is time-indexed, it represents an approximation of the real scheduling problem. Therefore, post-processing is used to improve the solution obtained. We study a case from the channel at SKF and present results in the form of optimal production schedules and the number of pallets of rollers in the buffers during the planning period. A comparison is made between optimal schedules obtained from the time-indexed model with different time step intervals. |
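As a rough illustration of what a time-indexed model looks like (generic single-machine notation assumed here, not the SKF formulation), let z_{bt} = 1 if batch b, with processing time p_b and release period r_b, starts in period t:

    \min \sum_{b} \sum_{t} (t + p_b - r_b)\, z_{bt}
    \quad \text{s.t.} \quad
    \sum_{t \ge r_b} z_{bt} = 1 \ \ \forall b, \qquad
    \sum_{b} \sum_{s = t - p_b + 1}^{t} z_{bs} \le 1 \ \ \forall t, \qquad
    z_{bt} \in \{0, 1\}.

The first constraint starts each batch exactly once (no earlier than its release), the second keeps batches from overlapping on the resource, and the objective sums the lead times. A finer time grid approximates the real problem more closely at the cost of many more variables, which is why a post-processing step can still improve the solution.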
Oct. 6th *no seminar* |
PIMS Afternoon on the Mathematics of Data and Information |
There will be no O.R. Seminar on October 6th, but PIMS is hosting two interesting talks for their Afternoon on the Mathematics of Data and Information, featuring Robert Calderbank and Ingrid Daubechies. These events are located at IRMACS on the Burnaby campus, from 2 p.m. to 4:30 p.m. |
Oct. 1st *8:30-4:00* |
WCOM Hosted by UBC |
Details of the Fall 2016 West Coast Optimization Meeting here. |
Sept. 29th *SUR 2990* |
Timothy Yusun Ph.D. thesis proposal presentation Senior Supervisor: T. Stephen |
Circuit Diameters of Polyhedra and Related Conjectures Abstract: The study of polytope diameters has flourished in recent years due to the insight it provides into the performance of the simplex method for linear programming. In this thesis we consider a variant of the combinatorial diameter of polyhedra called the circuit diameter, where we allow vertex-vertex walks to enter the interior of the polyhedron, using its circuit directions. In 1967, Klee and Walkup showed the equivalence of a family of conjectures about the combinatorial diameter - the Hirsch conjecture, the d-step conjecture, and the nonrevisiting conjecture. They disproved these conjectures for unbounded polyhedra in the same article, by constructing a 4-dimensional polyhedron U4 with 8 facets and diameter 5. The conjectures remained open for bounded polytopes until 2010, when a counterexample was discovered by Santos (in 43 dimensions). We will show that U4 is no longer a counterexample for the circuit version of Hirsch - that is, it has a circuit diameter of at most 4, independent of its realization. We aim to explore whether the other known counterexamples satisfy Hirsch as well. It is currently an open question whether the circuit diameter satisfies the Hirsch bound. We also propose to study a reformulation in the circuit setting of this family of conjectures. However, a more nuanced version of simplicity is needed to transfer these claims to circuit diameter. Then we explore empirical studies of simplex-like algorithms that employ circuit diameter. |
Sept. 22nd via COCANA (Kelowna) |
Yves Lucet UBC Okanagan |
Approximate Subdifferential Computation in Computational Convex Analysis Further details available from the COCANA Website. |
Sept. 15th |
Richard Hoshino Quest University |
Solving Quadratic Optimization Problems using the Cauchy-Schwarz Inequality Abstract: Quadratic Programming (QP) is an example of a non-linear programming problem, where the goal is to optimize a quadratic function on n variables, subject to linear constraints on those variables. In this talk, we provide an alternative way of solving one special type of QP problem, via the Cauchy-Schwarz Inequality. We apply the technique to determine the optimal fare prices for a train network that charges its commuters by total travel distance; this will inform the work of a well-known provincial transportation organization that is in the process of conducting a comprehensive review of transit fare policy for the first time in its history. |
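A generic worked example of the technique (my illustration; the talk's fare-pricing model is not reproduced here): minimize a separable quadratic subject to a single linear constraint,

    \min \Big\{ \sum_{i=1}^{n} a_i x_i^2 \;:\; \sum_{i=1}^{n} x_i = d \Big\}, \qquad a_i > 0.

By the Cauchy-Schwarz inequality,

    d^2 = \Big( \sum_{i} \tfrac{1}{\sqrt{a_i}} \cdot \sqrt{a_i}\, x_i \Big)^2
        \le \Big( \sum_{i} \tfrac{1}{a_i} \Big) \Big( \sum_{i} a_i x_i^2 \Big),

so the optimal value is d^2 \big/ \sum_{i} a_i^{-1}, attained when x_i \propto 1/a_i, with no iterative QP solver required.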
Sept. 8th via COCANA (Kelowna) |
Warren Hare UBC Okanagan |
Algorithmic Construction of the Subdifferential from Directional Derivatives Further details available from the COCANA Website. |
2015-2016
Friday, Aug. 5th *3:00 p.m.* *SUR 2750* |
Jingxin Guo Simon Fraser University |
Optimal Locations of On-line and Off-line Rework Stations in a Serial Production System Abstract: The production rate and product quality are two vital concerns for any manufacturing industry. Defective items reduce the production rate and increase the unit production cost. Moreover, if nonconforming items reach the customers, then the manufacturer’s goodwill may drop drastically. Thus, quality inspection is treated as an inherent part of manufacturing. In this research, an N-stage serial production line with an inspection station at its end is considered to make decisions concerning this issue. On detecting a defective item at the end of the line, it is scrapped, repaired at a regular workstation, or sent to an off-line rework station for repair. Assuming each workstation produces a single type of defect, a unit cost function is developed for alternative decisions on each type of defect. In order to minimise the unit cost of production and determine an appropriate decision for individual defect types, a fractional mixed integer nonlinear program is formulated. After transformation to a mixed integer linear programming problem, it is solved optimally. A small problem from the garments industry is described in detail to show the solution procedure with a branch and bound method. Empirical tests with up to 40 workstations are performed to show the efficiency of the solution process. (The talk will discuss the paper: Md. Shahriar J. Hossain & B. R. Sarker (2016) Optimal locations of on-line and off-line rework stations in a serial production system, International Journal of Production Research, 54:12, 3603-3621) |
July 28th *SUR 5380* |
Hubert Pun Ivey School of Business University of Western Ontario |
Creating Competitors: When Advertising Encourages Copycats Abstract: We use a two-period game theoretical model to examine the decisions of a manufacturer and a copycat firm. Specifically, the manufacturer's product has a higher quality than that of the copycat product. The manufacturer decides on the amount of its market expansion advertising investment in the first period and on its pricing strategy in both periods; the advertising investment increases the "size of the pie," but eventually, the manufacturer may end up inadvertently sharing the benefits with the copycat. After the first period, the copycat makes a market-entry decision, and, if it opts to enter, it also decides on a pricing strategy. The customers are forward-looking and strategic, and they decide whether or not to buy, when to buy, and which product to buy. We find that, interestingly, lower quality levels of the manufacturer's product may increase the manufacturer's prices and profit. Moreover, the manufacturer may be worse off when customers are more likely to purchase its product immediately rather than wait for a price reduction or for the copycat's product. Finally, the copycat may be worse off when customers withhold their purchases in the first period in anticipation of the copycat product becoming available in a later period. |
July 21st *SUR 2750* |
Rimi Sharma Simon Fraser University |
Discrete Optimization Models in Portfolio Management Abstract: Portfolio optimization, as a term, refers to a mathematical approach to making investment decisions among a group of financial instruments or assets. This concept has been helpful in the development and understanding of financial markets and financial decision-making. In this talk we present a survey of some important MILP-based approaches for solving portfolio management problems, the real-world features that have been incorporated in the models, and the solution methodologies for the resulting models. |
Friday, June 24th *10:00 a.m.* *SUR 3040* |
Yash Aneja Odette School of Business University of Windsor |
Survivable Regenerator Location Problem Abstract: We consider the problem of locating regenerators optimally on certain nodes in an optical network to ensure that all nodes can communicate with each other even when (at most) one edge (optical fiber) of the physical network topology fails. The quality of an optical signal propagating through a wavelength division multiplexed (WDM) network deteriorates due to physical layer impairments such as optical noise, chromatic and polarization mode dispersion, cross-phase modulation and cross talk. When the quality of the signal becomes unacceptable, it is necessary to carry out 3R-regeneration (re-amplify, reshape and re-time) on the optical signal to bring the signal back to its original quality. The optical reach is defined as the maximal distance a signal can travel before it requires regeneration. This NP-hard problem can be formulated as a set-covering problem with an exponential number of constraints which are known only implicitly. We study the polyhedral structure and a branch-and-cut algorithm for this problem. We also consider several greedy approaches to obtaining good solutions to this problem. We show that a special case results in an approximation algorithm with a performance ratio of “ln(delta)”, the natural logarithm of delta, where delta is the maximum degree of a given graph. |
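The greedy approaches mentioned above can be pictured with the textbook greedy set-cover rule: repeatedly pick the set that covers the most still-uncovered elements, which yields the classical logarithmic guarantee. Below is a small sketch with made-up data (the actual covering sets in the paper come from optical-reach and survivability constraints):

    def greedy_set_cover(universe, sets):
        """universe: set of elements; sets: dict name -> set of covered elements."""
        uncovered = set(universe)
        chosen = []
        while uncovered:
            # pick the set covering the most still-uncovered elements
            best = max(sets, key=lambda s: len(sets[s] & uncovered))
            if not sets[best] & uncovered:
                raise ValueError("infeasible: some element cannot be covered")
            chosen.append(best)
            uncovered -= sets[best]
        return chosen

    # toy instance
    print(greedy_set_cover({1, 2, 3, 4, 5},
                           {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}))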
Thursday, Apr. 7th *11:30* *SUR 3290* |
J. Hung, B. Lo and F. Si Simon Fraser University |
Math 402W Operations Research Clinic project presentation Improving Public Transit Fare System: A Case Study for TransLink Fare System in Metro Vancouver |
Thursday, Apr. 7th *10:30* *SUR 3290* |
O. Jackson and I. Martinez Simon Fraser University |
Math 402W Operations Research Clinic project presentation A Queueing Network Model for Refugee Language Courses in Vancouver |
Wednesday, Apr. 6th *6:00 pm* *SUR 5360* |
Craig Mathews, Fraser Health Authority and Ali Saremi, Provincial Health Services Authority |
Canadian Operational Research Society Seminars and Networking Event Craig Mathews and Ali Saremi will discuss projects they have been involved in at Fraser Health. |
Friday, Apr. 1st *3:30 p.m.* *SUR 3040* |
Timothy Chan Mechanical and Industrial Engineering University of Toronto |
Goodness of Fit in Inverse Optimization Abstract: The classical inverse optimization methodology for linear optimization assumes a given solution is a candidate to be optimal. Real data, however, is imperfect and noisy: there is no guarantee that a given solution is optimal for any cost vector. Inspired by regression, this paper presents a unified framework for cost function estimation in linear optimization consisting of a general inverse optimization model and a corresponding goodness-of-fit metric. Although our inverse optimization model is in general nonconvex, we derive a closed-form solution and present the corresponding geometric intuition. Our goodness-of-fit metric, rho, termed the coefficient of complementarity, has similar properties to R2 from regression and is quasiconvex in the input data, leading to an intuitive geometric interpretation. We derive a lower bound for rho that possesses the same properties but is more tractable. We demonstrate the application of our framework for model estimation and evaluation in production planning and cancer therapy. |
Mar. 10th via COCANA (Kelowna) |
Sedi Bartz UBC Okanagan |
New Dominant Properties of the Proximal and the Resolvent Averages Further details available from the COCANA Website. |
Mar. 3rd via COCANA (Kelowna) |
Sébastien Le Digabel Polytechnique Montréal |
Black-Box Optimization: Algorithms and Applications Further details available from the COCANA Website. |
Feb. 11th via COCANA (Kelowna) |
Young-Heon Kim UBC |
Optimal Martingale Transport in General Dimensions Further details available from the COCANA Website. |
Feb. 4th via COCANA (Kelowna) |
Pontus Giselsson Lunds Universitet (Sweden) |
Tight Linear Convergence Rate Bounds for Douglas-Rachford Splitting Further details available from the COCANA Website. |
Nov. 26th |
Binay Bhattacharya Simon Fraser University |
Facility Locations in Dynamic Networks Abstract: This talk discusses the problem of locating facilities in dynamic networks. In dynamic flow models it takes time for flow to pass an arc, and flow can be delayed at nodes. An edge’s capacity represents the amount of items that can enter the edge in a unit time. Dynamic flow problems require moving a set of items from source nodes to sink nodes in minimal time. Congestion occurs at a vertex when more items start waiting at the vertex to enter an edge than can immediately leave the vertex. Finding a quickest dynamic flow usually requires designing flows that avoid (too much) congestion. These problems have been motivated by the problem of evacuation planning during natural disasters such as earthquakes and weather disturbances. Each vertex of a dynamic flow network represents a source with a certain number of people. The objective of the planning is to determine an evacuation schedule to move everybody from all the source nodes to evacuation centers (sinks) in minimal time. The evacuation problem can be viewed as a generalization of the classical k-center and k-median problems. This talk discusses some recent work on the graph evacuation problem carried out at SFU. It is joint work with Tiko Kameda, Ante Custic, Yuya Higashikawa and Naoki Katoh. |
Nov. 12th via COCANA (Kelowna) |
Fang Fang and Ma Hui UBC Okanagan |
Energy-Efficient Resource Scheduling for Non-Orthogonal Multiple Access (NOMA) Wireless Network: A DC Programming Approach (Fang) and Convex Analysis Based Beamforming of Decode-and-Forward Cooperation for Improving Wireless Physical Layer Security (Hui) Further details available from the COCANA Website. |
Oct. 15th via COCANA (Kelowna) |
Warren Hare UBC Okanagan |
Proximal Bundle Methods for Inexact Oracle Functions Further details available from the COCANA Website. |
Oct. 10th *8:30-4:30* |
WCOM Hosted by UBC Okanagan |
Details of the Fall 2015 West Coast Optimization Meeting here. |
Oct. 1st via COCANA (Kelowna) |
Shawn Xianfu Wang UBC Okanagan |
Strongly convex functions, Moreau envelopes, and the generic nature of convex functions with strong minima Further details available from the COCANA Website. |
Tuesday, Sept. 22nd Joint O.R. and Discrete Math Seminar *Burnaby* *1:30* *AQ K9509* |
Ilya Shmulevich Institute for Systems Biology |
Probabilistic Boolean Networks as Models of Genetic Regulatory Networks Abstract: I will present Probabilistic Boolean Networks (PBNs), which are models of genetic regulatory networks that i) incorporate rule-based dependencies between genes; ii) allow the systematic study of global system dynamics; iii) are able to cope with uncertainty; iv) permit the quantification of the relative influence and sensitivity of genes in their interactions with other genes. PBNs share the appealing rule-based properties of Boolean networks, but are robust in the face of uncertainty. The dynamics of PBNs can be studied in the context of Markov Chains, with standard Boolean networks being special cases. I will also discuss the relationship between PBNs and Bayesian networks - a family of graphical models that explicitly represent probabilistic relationships between the variables. A major objective is the development of computational tools for the identification of potential targets for therapeutic intervention in diseases such as cancer. I will describe several approaches for finding the best genes with which to intervene in order to elicit desirable network behavior. |
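A minimal simulation sketch of the PBN dynamics described above (the three-gene toy network and its predictor probabilities below are invented purely for illustration):

    import random

    # each gene has a list of (Boolean predictor, selection probability) pairs;
    # the network state is a tuple of 0/1 values (g0, g1, g2)
    predictors = {
        0: [((lambda s: s[1] and s[2]), 0.7), ((lambda s: s[1]), 0.3)],
        1: [((lambda s: not s[0]), 1.0)],
        2: [((lambda s: s[0] or s[1]), 0.6), ((lambda s: s[2]), 0.4)],
    }

    def step(state):
        new = []
        for gene in sorted(predictors):
            funcs, weights = zip(*predictors[gene])
            f = random.choices(funcs, weights=weights)[0]   # pick a predictor at random
            new.append(int(f(state)))
        return tuple(new)

    state = (1, 0, 1)
    for _ in range(10):
        state = step(state)
        print(state)

Because the next state depends only on the current state and the random predictor choices, the induced process is a Markov chain on the 2^n network states, which is the viewpoint mentioned in the abstract.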
Sept. 17th |
Tamon Stephen Simon Fraser University |
Covering Grid Points with Rectilinear Subspaces Abstract: We consider a problem of covering a box of grid points with axis-aligned affine subspaces. The objective is to do this so that each co-ordinate hyperplane containing grid points contains a subspace from the cover, and to minimize the number of elements in the cover. We find the minimum size of such a cover, and characterize the covers attaining this minimum. This problem arises in connection with the colourful simplicial depth problem: a (mod 2) version of this result would prove the colourful simplicial depth conjecture as a corollary. In fact, an example shows that the (mod 2) generalization does not hold. However we conjecture that it almost holds in some relevant respects. |
2014-2015
Apr. 16th via COCANA (Kelowna) |
Minh Ngoc Dao UBC Okanagan and Hanoi National University of Education |
Nonconvex Bundle Method for Constrained Optimization Problems Further details available from the COCANA Website. |
Apr. 10th *10:30* *SUR 3250* |
M. Beddis, M. Mitrovic and M. Sharma Simon Fraser University |
Math 402W Operations Research Clinic project presentation Selecting Station Locations for a Public Bike-Share Program: A Case Study for the City of Vancouver, B.C. |
Apr. 9th |
Ante Ćustić Simon Fraser University |
Geometric versions of the 3-dimensional assignment problem Abstract: In this talk we will discuss the computational complexity of special cases of the 3-dimensional assignment problem where the elements are points in a Cartesian space and where the cost coefficients are the perimeters of the corresponding triangles measured according to a certain norm. All our results also carry over to the corresponding special cases of the 3-dimensional matching problem. The minimization version is NP-hard for every norm, even if the underlying Cartesian space is 2-dimensional. The maximization version is polynomially solvable, if the dimension of the Cartesian space is fixed and if the considered norm has a polyhedral unit ball. If the dimension of the Cartesian space is part of the input, the maximization version is NP-hard for every Lp norm. This is joint work with Bettina Klinz and Gerhard Woeginger, and a preprint is available here. |
Apr. 2nd via COCANA (Kelowna) |
Jim Nastos UBC Okanagan |
Observations on problem reductions Further details available from the COCANA Website. |
Mar. 26th |
Chen Greif Computer Science University of British Columbia |
Bounds on Eigenvalues of Matrices Arising from Interior-Point Methods Abstract: Interior-point methods feature prominently among numerical methods for inequality-constrained optimization problems, and involve the need to solve a sequence of linear systems that typically become increasingly ill-conditioned with the iterations. To solve these systems, whose original form has a nonsymmetric 3-by-3 block structure, it is common practice to perform block elimination and either solve the resulting reduced saddle-point system, or further reduce the system to the normal equations and apply a symmetric positive definite solver. In this talk we use energy estimates to obtain bounds on the eigenvalues of the matrices, and conclude that the original unreduced matrix has more favorable eigenvalue bounds than the alternative reduced versions. Our analysis includes regularized variants of those matrices that do not require typical regularity assumptions. This is joint work with Erin Moulding and Dominique Orban. |
Mar. 19th via COCANA (Kelowna) |
Jason Loeppky UBC Okanagan |
TBA Further details available from the COCANA Website. |
Mar. 5th via COCANA (Kelowna) |
Mark Schmidt UBC |
Tractable Big Data and Big Models in Machine Learning Further details available from the COCANA Website. |
Feb. 26th via COCANA (Kelowna) |
Walaa Moursi UBC Okanagan |
On the range of the Douglas-Rachford operator Further details available from the COCANA Website. |
Feb. 19th via COCANA (Kelowna) |
John Braun UBC Okanagan |
Improved Density Estimation via Data Sharpening Further details available from the COCANA Website. |
Feb. 5th |
Soon-Yi Wu National Cheng Kung University (Taiwan) |
On finite convergence of an explicit exchange method for convex semi-infinite programming problems with second-order cone constraints Abstract: In this talk, we propose an explicit exchange algorithm for solving a semi-infinite programming problem (SIP) with second-order cone (SOC) constraints. We prove, by using the slackness complementarity conditions, that the algorithm terminates in a finite number of iterations and the obtained solution sufficiently approximates the original SIP solution. In existing studies on SIPs, only nonnegativity constraints were considered, and hence, the slackness complementarity conditions were separable componentwise. However, in our study, the existing componentwise analyses are not applicable anymore since the slackness complementarity conditions are associated with SOCs. In order to overcome such a difficulty, we introduce a certain coordinate system based on the spectral factorization. In the numerical experiments, we solve some test problems to see the effectiveness of the proposed algorithm. |
Jan. 22nd via COCANA (Kelowna) |
Chayne Planiden UBC Okanagan |
Moreau Envelopes and Thresholds of Prox-boundedness Further details available from the COCANA Website. |
Jan. 8th *SUR 5380* |
Michael Armstrong Faculty of Business Brock University |
Salvo Models for Missile Combat Abstract: Modern surface warships attack and defend using guided missiles such as the Harpoon and Standard. Because few battles have been fought this way, missile combat is not as well understood as that involving gunfire. Salvo models provide a simple way to represent such battles, much as Lanchester models represent gunfire battles. This talk will introduce salvo combat models, describe some of their properties, and demonstrate their application to the carrier airstrikes of the 1942 Battle of the Coral Sea. |
Dec. 4th via COCANA (Kelowna) |
Abbas Milani UBC Okanagan |
Applications of Modeling and Optimization Tools for Quality Improvement in Composites Manufacturing Further details available from the COCANA Website. |
Tuesday, Nov. 25th Joint O.R. and Discrete Math Seminar *Burnaby* *AQ K9509* |
Bala Krishnamoorthy Department of Mathematics Washington State University |
Flat Norm Decomposition of Integral Currents Abstract: Currents represent generalized surfaces studied in geometric measure theory. The flat norm provides a natural distance in the space of currents, and works by decomposing a d-dimensional current into d- and (the boundary of) (d+1)-dimensional pieces. A natural question about currents is the following. If the input is an integral current, i.e., a current with integer multiplicities, can its flat norm decomposition be integral as well? The answer is not known in general, except in the case of d-currents that are boundaries of (d+1)-currents in (d+1)-dimensional space. On the other hand, for the discretization of the flat norm on a finite simplicial complex, the analogous statement remains true even when the inputs are not boundaries. This result is implied by the boundary matrix of the simplicial complex being totally unimodular, guaranteeing integer solutions for an associated integer linear program. We develop an analysis framework that extends the result in the simplicial setting to that for d-currents in (d+1)-dimensional space, provided a suitable triangulation result holds. Following results of Shewchuk on triangulating planar straight line graphs, our framework shows that the discrete result implies the continuous result for the case of 1-currents in 2D space. This is joint work with Sharif Ibrahim and Kevin Vixie, and a preprint is available here. |
Tuesday, Nov. 25th *10:00 a.m.* *SUR 5060* |
Piyashat Sripratak Department of Mathematics Simon Fraser University |
The Bipartite Boolean Quadratic Programming Problem Ph.D. thesis defence |
Nov. 20th via COCANA (Kelowna) |
Guillaume Carlier Université Paris-Dauphine |
Wasserstein barycenters and related problems: theory, numerics and applications Further details available from the COCANA Website. |
Oct. 16th via COCANA (Kelowna) |
John Braun UBC Okanagan |
Lying with Statistics! Optimally Changing Data to Improve Nonparametric Function Estimates Further details available from the COCANA Website. |
Oct. 2nd via COCANA (Kelowna) |
Yuriy Zinchenko University of Calgary |
On a shortest 2-path problem Further details available from the COCANA Website. |
Sept. 21st *8:30-4:30* |
WCOM Hosted by SFU Surrey |
Details of the Fall 2014 West Coast Optimization Meeting here. |
Sept. 11th via COCANA (Kelowna) |
Jane Ye University of Victoria |
Smoothing SQP methods for solving nonsmooth and nonconvex constrained optimization problems Further details available from the COCANA Website. |
Aug. 26th *10:00-12:00* *SUR 5380* |
Yong Zhang Ph.D. thesis defence Senior Supervisor: Z.Lu |
Optimization Methods for Sparse Approximation |
2013-2014
May 29th via COCANA (Kelowna) |
Jeffrey Pang Chin How National University of Singapore |
Set Intersection Problems with Supporting Halfspaces and Quadratic Programming: More Observations Further details available from the COCANA Website. |
May 15th |
Arash Rafiey Department of Computing Science Simon Fraser University |
PTAS for Some Instances of Fair Allocation Problem Abstract: We consider the problem of fair allocation of indivisible goods, where we are given a set I of m indivisible resources (items) and a set P of n customers (players) competing for the resources. Each resource j in I has the same value vj > 0 for the subset of customers interested in j and has no value for the other customers. The goal is to find a feasible allocation of the resources to the interested customers such that the minimum utility (sum of resource values) received by any customer is as high as possible. We are interested in instances of the problem that admit a PTAS (polynomial time approximation scheme). As an example, we demonstrate a PTAS when there exists an ordering of the resources such that each customer is interested (has positive evaluation) in a set of consecutive resources. Other variations of the resource allocation problem will be discussed. Based on joint work with R. Krishnamurti, K. Khodamoradi, and G. Stamulis. |
May 8th |
Martin Skutella Department of Mathematics TU Berlin |
Unsplittable and k-splittable flows in single-source networks Abstract: Given a network with a single source and several sinks with associated demands, we study flow problems with restrictions on the flow-carrying paths. In the unsplittable flow problem, the demand of each sink has to be satisfied along a single source-sink path. The k-splittable flow problem allows each demand to be split into at most k packets such that each packet is sent along a single source-sink path. We discuss recent results and algorithms for turning an arbitrary flow into an unsplittable or k-splittable flow with bounded increase of flow values along arcs. |
Apr. 17th |
Wotao Yin Department of Mathematics UCLA |
Distributed Optimization over Network Abstract: There has been considerable recent interest in solving optimization problems with data stored over a network. For these problems we need techniques that process data locally yet converge rapidly to an (approximate) solution across the entire network. This talk reviews primarily first-order algorithms for large-scale optimization of the distributed or decentralized types. We emphasize recognizing separable structures in a large set of signal processing and statistical learning problems and demonstrate that, through skillful use of gradient, proximal, duality, and splitting techniques, massively parallel algorithms can be developed. Numerical results are presented to demonstrate the scalability of the parallel codes on typical Unix clusters and Amazon EC2. |
Apr. 15th *3:00-4:30* *SUR 4010* |
Krishna Teja Malladi M.Sc. thesis defense Senior Supervisor: A. Punnen |
Cluster Restricted Maximum Weight Clique Problem and linkages with Satellite Image Acquisition Scheduling Abstract: We consider the cluster-restricted maximum weight clique problem (CRCP), a variation of the well-known maximum weight clique problem. While CRCP itself is mathematically interesting, our motivation to study the problem primarily comes from its application in the area of Satellite Image Acquisition Scheduling. Earth observing satellites revolve around the earth in specific orbits and take images of prescribed areas requested by the clients. Not every region can be fully acquired in a single satellite pass. This necessitates the division of the region into multiple strips. There might be several image acquisition opportunities for each strip as the satellites have agility in their rolling angles. Then the Satellite Image Acquisition Scheduling Problem (SIASP) is to select the opportunities to acquire as many images as possible, without repetition, within a planning horizon while considering the image priorities and energy constraints. SIASP has a piecewise linear objective function to favor the completion of an image acquisition request and to avoid partial acquisition of many requests. An extensive experimental study is provided on randomly generated instances based on the predicted statistics given by MDA Corporation, Richmond, Canada. These experiments are intended as a preliminary investigation on image acquisition scheduling for the Canadian RADARSAT Constellation Mission (RCM), a constellation of three satellites, which is planned to be launched in 2018. SIASP can be modeled as a CRCP with a piecewise linear objective function. We provide integer programming (IP) formulations for CRCP with linear and piecewise linear objective functions. We also suggest heuristic (metaheuristic) algorithms that exploit the power of modern IP solvers such as CPLEX. Experimental results using the heuristic algorithms on DIMACS and BOSHLIB benchmark instances for the clique problem are also reported. Finally, an exact algorithm for CRCP along with some theoretical analysis is presented. |
Apr. 10th *3:30-5:00* *SUR 5320* |
Xueying (Sylvia) Shen M.Sc. thesis defense Senior Supervisor: A. Punnen |
Path Selection Problem in Network Design Abstract: In this thesis we study several models for the Path Selection Problem associated with the construction of fiber optic networks. Four different variations are studied: the Greedy Path Selection Problem (GPSP), the Benevolent Path Selection Problem (BPSP), the Discounted Path Selection Problem (DPSP), and Path Selection with capacity expansion. All the variations are NP-hard, and polynomially solvable special cases are identified for GPSP and BPSP. We also present a detailed complexity analysis and integer programming formulations for these problems. Heuristic algorithms, including a greedy algorithm, a semi-greedy algorithm and a multi-start local search algorithm, are developed for each problem. Extensive computational results are provided along with an analysis of algorithm performance. We also present some future directions for research. |
Apr. 10th *10:00-11:00* *SUR 4010* |
Yong Zhang Ph.D. thesis proposal presentation Senior Supervisor: Z. Lu |
Optimization Methods for Sparse Approximation Abstract: Recently, numerous applications have arisen in which sparse solutions are sought. Mathematically, all these applications can be formulated as l0 minimization problems. In this proposal, we first briefly introduce some interesting applications of sparse approximation problems and discuss their formulations. We then look into one important application, namely sparse Principal Component Analysis (PCA), which is a popular tool for data processing and dimension reduction. We discuss some drawbacks of the existing formulations for sparse PCA and propose a new formulation for it by taking into account three nice properties of the standard PCA, that is, maximal total explained variance, uncorrelatedness of principal components, and orthogonality of loading vectors. We then propose a novel augmented Lagrangian method to solve this formulation. Additionally, we propose two nonmonotone gradient methods for solving the augmented Lagrangian subproblems. The augmented Lagrangian method and the nonmonotone gradient methods can also be adapted to solve the l1-norm relaxation problems of other sparse approximation applications. Finally, we propose penalty decomposition methods for solving the original l0 minimization problems in which a sequence of penalty subproblems are solved by a block coordinate descent method. |
Apr. 8th *12:30-2:20* *SUR 2740* |
Operations Research Clinic project presentations | Please join us as our undergraduates from Math 402W Operations Research Clinic present their projects. Each project is an analysis of an applied problem using operations research techniques. The topics, selected by the students, are:
|
Apr. 3rd via COCANA (Kelowna) |
Samir Adly Université de Limoges |
Modern Nonsmooth Analysis and its Applications in Electrical Engineering Further details available from the COCANA Website. |
Mar. 27th via COCANA (Kelowna) |
Hung Phan University of British Columbia, Okanagan |
Linear convergence of the Douglas-Rachford method for two closed sets Further details available from the COCANA Website. |
Feb. 27th |
Zhaosong Lu Department of Mathematics SFU |
Randomized Block Coordinate Gradient Methods for a Class of Structured Nonlinear Programming Abstract: Nowadays a class of huge-scale structured optimization problems arises in emerging areas such as machine learning. They can be reformulated as minimizing the sum of a smooth function and a block-separable nonsmooth function. For these problems, it is prohibitive to evaluate the full gradient of the smooth component of the objective function due to huge dimensionality, and hence the usual gradient methods cannot be efficiently applied. Nevertheless, its partial gradients can often be computed much more cheaply. In this talk we study a randomized block coordinate gradient (RBCG) method for solving this class of problems. At each iteration this method randomly picks a block, and solves a proximal gradient subproblem over the subspace defined by the block that only uses a partial gradient and usually has a closed-form solution. We present new iteration complexity results for this method when applied to convex problems. We also propose a nonmonotone RBCG method for solving a class of nonconvex problems with the above structure, and establish its global convergence and iteration complexity. In addition, we present new complexity results for the accelerated RBCG method proposed by Nesterov for solving unconstrained convex optimization problems. Finally, we discuss the application of these methods for solving some support vector machine problems and report some computational results. This is joint work with Lin Xiao at Microsoft Research, Redmond. |
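As a rough sketch of the scheme (not the authors' code; the lasso-type objective, the block partition, and the step sizes are assumptions made for illustration), a randomized block coordinate proximal gradient method for min 0.5*||Ax - b||^2 + lam*||x||_1 can look like this:

    import numpy as np

    def rbcg_lasso(A, b, lam, block_size=10, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
        L = [np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks]   # block Lipschitz constants
        x = np.zeros(n)
        r = A @ x - b                       # maintained residual A x - b
        for _ in range(iters):
            j = rng.integers(len(blocks))   # pick a block uniformly at random
            blk = blocks[j]
            g = A[:, blk].T @ r             # partial gradient over the chosen block
            z = x[blk] - g / L[j]
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L[j], 0.0)   # closed-form prox of l1
            r += A[:, blk] @ (x_new - x[blk])
            x[blk] = x_new
        return x

    # tiny synthetic example
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 30)); x_true = np.zeros(30); x_true[:3] = [2.0, -1.0, 1.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(rbcg_lasso(A, b, lam=0.1)[:5], 2))

Only the chosen block's partial gradient is touched per iteration, which is the point of the method when forming the full gradient is too expensive.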
Feb. 13th via COCANA (Kelowna) |
Yunier Bello Cruz Universidade Federal de Goiás |
On Forward-Backward Splitting Methods Further details available from the COCANA Website. |
Jan. 30th via COCANA (Kelowna) |
Chris Ryan The University of Chicago, Booth School of Business |
Duality theory via Fourier-Motzkin Elimination Further details available from the COCANA Website. |
Jan. 14th (Tuesday, 2:30 p.m.) *SUR 3010* |
Ray Kawai School of Mathematics and Statistics University of Sydney |
Mathematical Programming Gives Hard Bounds of the Dirichlet Problem for Partial Differential Equations Abstract: Differential equations, either deterministic or stochastic, play an indispensable role in modeling virtually every dynamic in the physical and social sciences, and devising efficient computational methods for differential equations is thus of paramount importance. The existing methods, such as finite difference, finite element and spectral methods, are designed to provide a good approximation of the solution, but such approximations do not provide any direct information about how far they are from the true value. We propose a novel numerical method based on mathematical programming for the Dirichlet problem for elliptic and parabolic partial differential equations without discretization. Our method is designed to provide hard upper and lower bounds of the solution by mathematical programming. The availability of hard bounds is of paramount significance as those hard bounds form a 100%-confidence interval in the context of probabilistic Monte Carlo methods. An optimization problem is formulated based upon the probabilistic representation of the solution and can be solved by semidefinite programming with the help of sum-of-squares relaxation. Various theory-based techniques are developed to cope with difficult situations, such as finding a bounding polynomial function over the entire domain through a single implementation of the optimization problem. Numerical results are presented to support our theoretical developments and to illustrate the effectiveness of our methodology and techniques. |
Nov. 28th via COCANA (Kelowna) |
Nghia Tran University of British Columbia, Okanagan |
Metric Regularity of the Subdifferential Further details available from the COCANA Website. |
Nov. 14th via COCANA (Kelowna) |
Warren Hare University of British Columbia, Okanagan |
Numerical Variational Analysis for VU-theory Further details available from the COCANA Website. |
Nov. 14th *1:30 p.m.* |
Dachuan Xu Department of Applied Mathematics Beijing University of Technology |
Primal-dual approximation algorithm for the two-level facility location problem via a dual quasi-greedy approach Abstract: The main contribution of this work is to propose a primal-dual combinatorial 3(1+epsilon)-approximation algorithm for the two-level facility location problem (2-LFLP) by exploring the approximation oracle concept. This result improves the previous primal-dual 6-approximation algorithm for the multilevel facility location problem, and also matches the previous primal-dual approximation ratio for the single-level facility location problem. One of the major merits of primal-dual type algorithms is their easy adaptation to other variants of the facility location problem. As a demonstration, our primal-dual approximation algorithm can be easily adapted to several variants of the 2-LFLP, including models with stochastic scenarios, dynamically arriving demands, and linear facility costs. (Joint work with Chenchen Wu and Donglei Du.) |
Oct. 29th *10:00 a.m.* *SUR 2980* |
Xiaorui Li Department of Mathematics Simon Fraser University |
Sparse Inverse Covariance Selection with Non-Convex Regularizations M.Sc. thesis defence |
Oct. 17th via COCANA (Kelowna) |
Liangjin Yao Carma, University of Newcastle |
Sum theorems for maximally monotone operators of type (FPV) Further details available from the COCANA Website. |
Oct. 3rd via COCANA (Kelowna) |
Hristo Sendov Statistics and Actuarial Science, University of Western Ontario |
Loci of Complex Polynomials Further details available from the COCANA Website. |
Sept. 19th via COCANA (Kelowna) |
Pooyan Shirvani Ghomi Mathematics, University of Calgary |
A moment-based approach for DVH-guided radiotherapy treatment plan optimization Further details available from the COCANA Website. |
Sept. 5th via COCANA (Kelowna) |
Bastien Talgorn GERAD |
Constrained Blackbox Optimization for Engineering Design Further details available from the COCANA Website. |
July 18th * 1 p.m. * *SUR 2995* |
I-Lin Wang Department of Industrial and Information Management National Cheng Kung University |
Operations Research and Logistics Issues raised in Bike Sharing Systems Abstract: Bike sharing systems (BSS) have been popular in several modern metropolitan areas. A bike is taken from a rental site and returned at another to help deal with the first and last mile problem for connecting passengers to public transit systems. The success of a BSS relies on effective logistics management of its bikes. In particular, bikes need to be moved between rental sites to satisfy more of the demand for taking and returning bikes. In this talk, the speaker will explain how a BSS works, how to model the network design problem that seeks the best locations for rental sites, how to balance the imbalanced bike flows caused by the rental demands, and how to solve these problems by efficient heuristics. |
2012-2013
No listing
2011-2012
August 9th |
Zhipeng Lü Department of Computer Science Huazhong University of Science and Technology |
Advanced Metaheuristics for Curriculum-Based Course Timetabling Abstract: In operational research, computer science and artificial intelligence, there exist a large number of large scale combinatorial optimization problems that have been proven to be NP-hard. For these problems, exact algorithms only scale to instances of very limited size. However, metaheuristic algorithms can solve large scale NP-hard problems effectively within a reasonable computational time. In designing a powerful metaheuristic algorithm, the essential point is to obtain a good tradeoff between intensification and diversification and to consider the specific problem structure when designing search operators. One case study on the university course timetabling problem is presented to show how to design an effective metaheuristic algorithm for NP-hard problems. I will present an Adaptive Tabu Search algorithm (denoted by ATS) for solving a problem of curriculum-based course timetabling. The proposed algorithm follows a general framework composed of three phases: initialization, intensification and diversification. The initialization phase constructs a feasible initial timetable using a fast greedy heuristic. Then an adaptively combined intensification and diversification phase is used to reduce the number of soft constraint violations while maintaining the satisfaction of hard constraints. The proposed ATS algorithm integrates several distinguishing features such as an original double Kempe chains neighborhood structure, a penalty-guided perturbation operator and an adaptive search mechanism. Computational results show the high effectiveness of the proposed ATS algorithm, compared with five reference algorithms as well as the current best known results. An analysis is also presented to explain which ingredients of the ATS algorithm are essential. |
July 19th |
Paul Hsing Luh Department of Mathematical Sciences National Chengchi University |
Estimating the loss probability under heavy traffic conditions Abstract: This talk presents a multiple-server queueing model under the assumptions of renewal arrival processes and limited buffer size. An approximation for the loss probability and the asymptotic behavior are studied under the heavy traffic conditions. We present an asymptotic analysis of the loss probability when both the arrival rate and number of servers approach infinity. In illustrative examples, the loss probabilities are estimated with heavy traffic under three common distributions of inter-arrival times: exponential, deterministic and Erlang-r distributions, respectively. |
June 21st *SUR 2990* |
Chen Xu Department of Statistics University of British Columbia |
The Sparse MLE for Variable Screening in Ultra-High Dimensional Feature Space Abstract: Feature selection is fundamental for modeling high dimensional data, where the number of features can be huge or even larger than the sample size. Reducing the dimension of the feature space is essential in such analysis. Motivated by the seminal theory of sure independence screening (SIS; Fan and Lv (2008)), we investigate another screening method, via the sparsity-restricted maximum likelihood estimator, to remove most irrelevant features before the formal selection. Compared with SIS, which screens features based on marginal correlations, the new method accounts for more joint effects between the covariates and thus can be more reliable in applications. We establish the consistency of the proposed method in the context of (ultra) high dimensional generalized linear models and further illustrate its performance through numerical examples. |
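For context, the SIS baseline mentioned above can be sketched in a few lines (synthetic data; the talk's sparse-MLE screening replaces this marginal ranking with a sparsity-restricted likelihood fit):

    import numpy as np

    def sis_screen(X, y, d):
        # rank features by the magnitude of their marginal correlation with y
        Xc = (X - X.mean(axis=0)) / X.std(axis=0)
        yc = y - y.mean()
        scores = np.abs(Xc.T @ yc)
        return np.argsort(scores)[::-1][:d]   # indices of the d best-scoring features

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 1000))                     # n = 200 samples, p = 1000 features
    y = 3 * X[:, 0] - 2 * X[:, 5] + rng.standard_normal(200)
    print(sis_screen(X, y, d=10))                            # features 0 and 5 should rank highly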
June 4th 3:30 |
Bruce Shepherd Department of Math and Stats McGill University |
On combinatorial optimization problems with side constraints Abstract: We describe several network flow and network design problems where recent changes in technology required strengthening the notion of "feasibility". In each case, we try to assess the impact or "cost" of adding the new constraint. Some models we consider include resilient, unsplittable and confluent flows. |
June 4th **10:30** |
Bin Dong Department of Mathematics University of Arizona |
Sparse Approximation by Wavelet Frames and Applications Abstract: Wavelet frames are known to be able to sparsely approximate piecewise smooth functions, which has recently been used for solving ill-posed linear inverse problems such as image restoration and computed tomography. In the first half of this talk, I will start with a brief review of wavelet frame based models and their connections to variational models. Then I will present a model (as well as some fast algorithms) that penalizes the l0-norm of the frame coefficients, instead of the commonly used l1-norm. Numerical experiments show that using the l0-norm has advantages for some specific type of images. The second half of the talk is focused on the applications of wavelet frame based models to x-ray based CT image reconstruction. |
March 29th |
Yuriy Zinchenko Department of Math and Stats University of Calgary |
Polytopes and Arrangements: Diameter and Curvature Abstract: We introduce a continuous analogue of the Hirsch conjecture and a discrete analogue of the result of Dedieu, Malajovich and Shub. We prove a continuous analogue of the result of Holt and Klee, namely, we construct a family of polytopes which attain the conjectured order of the largest total curvature, and a continuous analogue of a d-step equivalence result for the diameter of a polytope. Potential extensions of this work will be highlighted. Based in part on joint work with Antoine Deza (McMaster) and Tamas Terlaky (Lehigh). |
February 23rd |
Roger Yu Department of Mathematics and Statistics Thompson Rivers University |
Scheduling Pilot Training Problem Abstract: In this talk, we will report on an Engage Grant Project "Scheduling Pilot Training Problem" conducted with three TRU students. This is a joint project with Pelesys Learning System Inc., a Richmond company. The project is to build a system to assist in the scheduling of training and re-certification for airline pilots. After building multiple mathematical models for this purpose, we created Recurrent Training Model 5, a model that is capable of assigning pilots, instructors, rooms, and time slots to the courses involved in the training. This model has been implemented in the optimization software AIMMS; in addition to providing a training schedule, the system allows data to be imported from a database for ease of data entry and utilizes search heuristics (e.g., tabu search) to assist in generating a good schedule, with all its features easily accessible from a user-friendly GUI. |
February 9th |
Daniel Karapetyan Department of Mathematics Simon Fraser University |
Efficient Local Search Algorithms for Known and New Neighborhoods for the Generalized Traveling Salesman Problem Abstract: The Generalized Traveling Salesman Problem (GTSP) is a well-known combinatorial optimization problem with a host of applications. It is an extension of the Traveling Salesman Problem (TSP) where the set of cities is partitioned into so-called clusters, and the salesman has to visit every cluster exactly once. While the GTSP is a very important combinatorial optimization problem and is well studied in many aspects, the local search algorithms used in the literature are mostly basic adaptations of simple TSP heuristics. Hence, a thorough and deep research of the neighborhoods and local search algorithms specific to the GTSP is required. We formalize the procedure of adaptation of a TSP neighborhood for the GTSP and classify all other existing and some new GTSP neighborhoods. For every neighborhood, we provide efficient exploration algorithms that are often significantly faster than the ones known from the literature. Finally, we compare different local search implementations empirically. This is joint work with Gregory Gutin. |
January 26th |
Joe Qranfal Department of Mathematics Simon Fraser University |
A Fast Computational Algorithm for Segmentation of Noisy Medical Images Abstract: We show here that an implementation of a Markov random field image segmentation algorithm works well for the purpose of denoising and segmenting medical images. One of the main contributions here is the ability for a user to manipulate the image online so as to achieve clear delineation of objects of interest in the image. This is made possible by the efficiency of the implementation. Results are presented for images that are generated by Single Photon Emission Computed Tomography and Magnetic Resonance Imaging. The results show that the method presented is effective at denoising medical images as well as segmenting tissue types, organs, lesions, and other features within medical images. We advocate that this method should be considered as part of the medical imaging toolbox. |
January 20th *Friday, 2:30 p.m.* *SUR 3120* |
Michael Monagan Department of Mathematics Simon Fraser University |
Computing Tutte polynomials Abstract: The Tutte polynomial from graph theory is the most general graph invariant that can be computed by edge deletion and contraction. Because it includes the chromatic polynomial as a special case, it is NP-hard to compute, and a straightforward implementation of the edge deletion and contraction algorithm, like the one in Mathematica, takes O( 1.6^(n+m) ) time where n=|V| and m=|E|. In recent work, Haggard, Pearce and Royle studied edge selection heuristics and used the graph normal form in Brendan McKay's "nauty" package to speed up the edge deletion contraction algorithm. They succeeded in computing the Tutte polynomial of the truncated icosahedron graph, a graph with 60 vertices and 90 edges, in one week on 150 computers. In this talk we will present a new edge selection heuristic which we have implemented in vanilla Maple. It dramatically improves the computation time and, surprisingly, computes the Tutte polynomial of some generalized Petersen graphs in polynomial time. We will also present the connection of the Tutte polynomial with the reliability polynomial from network theory, which is what motivated us to consider computing Tutte polynomials, and we comment on what facilities in Maple aided the implementation. Our examples are presented using Maple's GraphTheory package, which a group of students and faculty at SFU developed. No prior knowledge of Tutte polynomials is assumed. |
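For readers who have not seen edge deletion and contraction before, the following is a minimal Python sketch of the classical textbook recursion (a plain recursion only, not the speaker's Maple implementation or edge-selection heuristic); the polynomial is returned as a dictionary mapping exponent pairs (i, j) to the coefficient of x^i y^j.

    from collections import defaultdict

    def poly_add(p, q):
        r = defaultdict(int, p)
        for k, c in q.items():
            r[k] += c
        return dict(r)

    def poly_shift(p, dx, dy):
        # multiply a polynomial by x^dx * y^dy
        return {(i + dx, j + dy): c for (i, j), c in p.items()}

    def connected(u, v, edges):
        # is v reachable from u using the given edge list?
        adj = defaultdict(set)
        for a, b in edges:
            adj[a].add(b); adj[b].add(a)
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            if x == v:
                return True
            for y in adj[x]:
                if y not in seen:
                    seen.add(y); stack.append(y)
        return False

    def tutte(edges):
        """Tutte polynomial of a multigraph given as a list of (u, v) edges."""
        if not edges:
            return {(0, 0): 1}
        (u, v), rest = edges[0], edges[1:]
        if u == v:                          # loop: T = y * T(G - e)
            return poly_shift(tutte(rest), 0, 1)
        # contract e = (u, v): merge v into u in the remaining edges
        contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
        if not connected(u, v, rest):       # bridge: T = x * T(G / e)
            return poly_shift(tutte(contracted), 1, 0)
        return poly_add(tutte(rest), tutte(contracted))   # ordinary edge

    # The triangle gives x^2 + x + y:
    print(tutte([(1, 2), (2, 3), (3, 1)]))   # {(2, 0): 1, (1, 0): 1, (0, 1): 1}

The exponential running time of this plain recursion is exactly what the edge-selection heuristics discussed in the talk are designed to tame.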
December 6th *1:30 p.m.* |
Timothy Yusun Department of Mathematics Simon Fraser University |
Dedekind Numbers and Related Sequences M.Sc. thesis defence Abstract: This thesis considers Dedekind numbers, where the nth Dedekind number D(n) is defined as the number of monotone Boolean functions (MBFs) on n variables, and R(n), which counts non-equivalent MBFs. The values of D(n) and R(n) are known up to n = 8 and n = 6, respectively; in this thesis we propose a strategy to compute R(7) by considering profile vectors of monotone Boolean functions. The profile of an MBF is a vector that encodes the number of minimal terms it has with 1, 2, ..., n elements. The computation is accomplished by first generating all profile vectors of 7-variable MBFs, and then counting the functions that satisfy each one by building up from scratch and taking disjunctions one by one. As a consequence of this result, we will be able to extend some sequences on the Online Encyclopedia of Integer Sequences, notably R(n) up to n = 7, using only modest computational resources. |
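As a concrete illustration of the definition only (brute force, not the profile-vector strategy developed in the thesis), the following Python snippet reproduces the first few Dedekind numbers:

    from itertools import product

    def is_monotone(f, n):
        # f[a] is the value on the input whose set of true variables is bitmask a;
        # monotone means turning a variable from 0 to 1 never decreases the output.
        for a in range(2 ** n):
            for i in range(n):
                if not a & (1 << i) and f[a] > f[a | (1 << i)]:
                    return False
        return True

    def dedekind(n):
        """Count monotone Boolean functions on n variables by brute force."""
        return sum(is_monotone(f, n) for f in product((0, 1), repeat=2 ** n))

    print([dedekind(n) for n in range(4)])   # [2, 3, 6, 20]; D(4) = 168 also checks out

This exhaustive approach is hopeless beyond very small n (there are 2^(2^n) candidate functions), which is why profile vectors and other structural tools are needed for n = 7.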
December 6th *11:00 a.m.* |
Sara Taghipour Department of Mathematics Simon Fraser University |
Quadratic Balanced Optimization Problems M.Sc. thesis defence Abstract: We introduce the Quadratic Balanced Optimization Problem (QBOP) and study its complexity. QBOP is NP-hard even for very special cases such as Quadratic Balanced Knapsack Problem (QBKP) and Quadratic Balanced Assignment Problem (QBAP). Several general purpose algorithms are proposed and tested on the special cases of QBKP and QBAP. Polynomial solvable special cases are also identified. |
November 24th *12:30 p.m.* |
Mehrnoush Malekesmaeili Department of Mathematics Simon Fraser University |
On Certificates that a Matrix does not have the Consecutive Ones Property M.Sc. thesis defence Abstract: A binary matrix has the consecutive ones property (C1P) if there exists a permutation of its columns which makes the 1s consecutive in every row. The C1P has many applications which range from computational biology to optimization. We give an overview of the C1P and its connections to other related problems. The main contribution of this thesis is about certificates of non-C1Pness. The notion of incompatibility graph of a binary matrix was introduced in [McConnell, SODA 2004] where it is shown that odd cycles of this graph provide a certificate for a non-C1P matrix. A bound of k+2 was claimed for the smallest odd cycle of a non-C1P matrix with k columns. We show that this result can be obtained directly via Tucker patterns, and that the correct bound is k + 2 when k is even, but k + 3 when k is odd. Furthermore we empirically study the minimal conflicting set certificate on synthetic data. |
November 3rd |
Piyashat Sripratak Department of Mathematics Simon Fraser University |
Facets of the Boolean Quadric Polytope Abstract: In this talk, we introduce a linearization of the unconstrained quadratic zero-one program in n variables. We denote the convex hull of its solutions the Boolean quadric polytope. Based on Manfred Padberg's paper "The Boolean Quadric Polytope: Some Characteristics, Facets and Relatives," we show all trivial facets and three families of facets of the polytope. There are O(3n) facets in these four types. At the end of the talk, we give some special cases and a generalization that are polynomially solvable or for which all facets can be described. |
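For orientation, the standard linearization behind the Boolean quadric polytope introduces a variable y_{ij} for each product x_i x_j and relaxes the products to the valid inequalities (generic textbook form, following Padberg's setup)

\[
y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0, \qquad x \in \{0,1\}^n ,
\]

and the Boolean quadric polytope is the convex hull of the 0-1 points satisfying y_{ij} = x_i x_j.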
October 20th *1:30 p.m.* |
Annie Zhang Department of Mathematics Simon Fraser University |
Quadratic Bottleneck Problems: Algorithms, Complexity and Related Topics Ph.D. thesis defence Abstract: In this thesis we study the quadratic bottleneck combinatorial optimization problem (QBCOP) which generalizes the well-known class of bottleneck problems. The solvability of QBCOP is linked to that of the linear combinatorial optimization problems with conflict constraints. Various properties and algorithms for these classes of problems are developed and special cases of spanning trees, knapsack type problems, and assignment variations are explored. These problems are shown to be strongly NP-hard even on very special graphs. We then identify polynomially solvable special cases and also develop heuristic algorithms. Experimental results are reported for all the heuristics we developed. As a by-product of our work, we have an approximation algorithm for the maximum edge clique partitioning problem, improving the best known performance ratio for the problem. |
2010-2011
July 14 **SUR 3010** |
Application of computational geometry to network location problems Abstract: One of the objectives of this talk is to show that tools from computational geometry can be effectively applied to solve location problems in networks. In particular we show that the p-center problem in general networks can be transformed to a geometric problem known as Klee's measure problem (KMP). When the underlying network is a partial k-tree (k fixed), we show that the p-center problem can be efficiently solved by transforming the problem to a range search problem. |
June 24 **Friday** |
The Generalized Quadratic Assignment Problem (GQAP) Abstract: The Generalized Quadratic Assignment Problem (GQAP) covers a broad class of problems that involve the minimization of a total pairwise interaction cost among M pieces of equipment, tasks or other entities, where the placement of these entities into N possible destinations depends on existing resource capacities. These problems arise naturally in yard management, where containers are to be located in the storage areas with limited capacity, and in distributed computing where processing tasks are to be assigned to processors with limited computing resources. This problem generalizes the well-known quadratic assignment problem (QAP), one of the most difficult problems in the combinatorial optimization field. Previous studies on the GQAP are relatively limited. The GQAP is NP-hard, and it is NP-hard to approximate. |
June 6 **Monday, @ 2 p.m. SUR 3290** |
Hybrid Heuristics for Routing of Barge Container Ships Abstract: The optimization of inland transport routes of barge container ships with the objective to maximize the profit of a shipping company is investigated. This problem consists of determining the upstream and downstream calling sequence and the number of loaded and empty containers transported between any two ports. Combinatorial and MIP formulations will be presented. The problem is initially approached by improved variants of existing MIP heuristics: Local Branching, Variable Neighborhood Branching, and Variable Neighborhood Decomposition Search. A new heuristic is then presented. The heuristic is based on the combination of the two formulations and aims to generate better-quality solutions of the considered problem. The proposed mixed-formulation Local Search (LS) represents a good basis for the implementation of LS-based metaheuristic methods, and a Multi-start Local Search within this framework will be presented. The comparison of the proposed approach with the state-of-the-art MIP-based heuristics will be reported. This research has been done with V. Maras, J. Lazić and N. Mladenović. |
April 21 |
A Certifying Algorithm for the Consecutive Ones Property Abstract: This is a presentation of the results in R. McConnell's paper on "A Certifying Algorithm for the Consecutive Ones Property" (proceedings of SODA 2004). |
March 31 |
Schur Complement-Based Preconditioners for Saddle-Point Linear Systems Abstract: Block preconditioners play a prominent role in the numerical solution of saddle-point linear systems arising from discretization of partial differential equations and large-scale constrained optimization problems. In this talk I will provide an overview of preconditioning approaches based on primal and dual Schur complements. A key for their efficiency is the ability to characterize the spectra of the underlying continuous operators. A challenge of particular interest is how to deal with situations in which the (1,1) block of the saddle-point matrix is nearly singular. We show that in such situations, the primal Schur complement related to the augmented Lagrangian methodology can be quite effective. The analytical results are complemented by numerical examples. |
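For context, a generic saddle-point system and its primal Schur complement (the notation here is illustrative, not necessarily the speaker's) take the form

\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad
S = -\,B A^{-1} B^{T},
\]

and block preconditioners are typically assembled from cheap approximations of A and S, for example the block-diagonal choice diag(A, -S).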
February 24 |
Santos' counterexamples to the Hirsch conjecture and prismatoids Abstract: In this talk we describe Santos' recent construction of counterexamples to the famous conjecture of Hirsch (1957): that a d-dimensional polytope with n facets has diameter at most n-d. The construction highlights a particular 5-dimensional "prismatoid" polytope. In joint work with Hugh Thomas, we give a simple proof that there is no analogous 4-dimensional prismatoid. |
February 10 |
Practical Methods for Computing Sparse or Low-Rank Solutions Abstract: Nowadays there are numerous emerging applications in science and engineering concerned with sparse or low-rank solutions, such as compressed sensing, image recovery and dimension reduction. In this talk, we propose some practical methods for computing sparse or low-rank solutions. In particular, we study a novel first-order augmented Lagrangian (AL) method for solving a class of nonsmooth constrained optimization problems which include l1-norm and nuclear-norm regularized problems as special cases. The global convergence of this method is established. We also develop two first-order methods for solving the associated AL subproblems and establish their global and local convergence. In addition, we propose penalty decomposition methods for solving l0-norm and rank minimization problems. Under some suitable assumptions, we show that each accumulation point is a stationary point of an equivalent smooth optimization problem. Finally, we demonstrate the computational performance of these methods by applying them to sparse principal component analysis and sparse logistic regression. |
January 20 |
Travelling Salesman Problems with Time Windows and Applications Abstract: The traveling salesman problem with time windows (TSPTW) is an instance of TSP with an extra constraint of visiting each city in a certain time interval. A special case of this problem will be discussed. Small instances are solved with CPLEX, while a heuristic method is proposed for larger instances. |
December 3 **Friday, 3 p.m.** |
Generalized Travelling Salesman Problems on Halin Graphs M.Sc. thesis defence (senior supervisor: Abraham Punnen). |
December 3 **Friday, 1 p.m.** |
Algorithms and Theoretical Topics on Selected Combinatorial Optimization Problems M.Sc. thesis defence (senior supervisor: Abraham Punnen). |
November 25 |
A quadratic kernel for k-Vertex-Deletion r-regular Subgraph Abstract: One way of thinking of a vertex cover of a graph is as a set of vertices whose removal leaves a stable set. Also, any stable set can be viewed as a 0-regular graph, so a natural generalization of Vertex Cover is to ask for a subset of at most k vertices whose deletion leaves an r-regular subgraph, where r is some fixed integer. This problem was introduced by Moser and Thilikos and, as expected, it is NP-hard for each fixed r. A parameterized problem is said to be Fixed Parameter Tractable (FPT) if there exists a function f and an algorithm that solves the problem in O(f(k)n^q) running time, where k is the parameter, n is the size of the input and q is a constant. There are several techniques to show that a parameterized problem is FPT, kernelization being one of the most widely applied. In the case of Vertex Cover, using a classical result of Nemhauser and Trotter we can deduce the existence of a kernel of size 2k, which is the best possible so far. Motivated by this, Moser and Thilikos found an algorithm for the general problem that produces a kernel that is cubic in k for each r>1, but that turns out quadratic in k for r=1. In this talk, we review this technique and see how to construct a kernel that is quadratic in k for each r>0. This is based on joint work with Stephan Thomasse. |
November 18 |
No O.R. Seminars: Computational Biology Seminar in Burnaby. |
November 4 |
SOS and SDP relaxations of sensor network localization Abstract: The sensor network localization problem has received much attention in recent years. This problem is NP-hard and thus various convex relaxation techniques have been applied to approximate it. In this talk, we compare the strength of SDP, ESDP and sparse-SOS relaxations. Like the SDP and ESDP relaxations, we show that zero individual trace for sparse-SOS relaxation also certifies accuracy of sensor position when distance measurements are exact. We show by a counterexample that this condition is no longer sufficient in the presence of distance measurement noise, for all three relaxations. This is based on joint work with Joao Gouveia and Paul Tseng. |
October 21 |
Parametric Stochastic Programming Models for Call-Center Workforce Scheduling Abstract: We develop and test an integrated forecasting and stochastic programming approach to workforce management in call centers. We first demonstrate that parametric forecasts can be used to drive stochastic programs whose results are stable with relatively small numbers of scenarios. We then extend our approach to include forecast updates and two-stage stochastic programs with recourse. We use experiments with two large sets of call-center data to explore the importance of the use of scenarios and the use of recourse actions. This is joint work with N. Gans, H. Shen, N. Korolev, A. McCord, and H. Ristock. |
October 7 |
Recognizing Graph Properties Through Polynomial Ideals Abstract: In the work of M. Laurent, J. Lasserre, P. Parrilo, Y. Nesterov, and others, optimization problems which are modeled by zero-dimensional radical ideals have been shown to have a finite sequence of semidefinite programs that converge to an optimal solution. This has important implications for combinatorial optimization as it allows for a paradigm to solve many problems in polynomial time. An example is integer optimization over the assignment polytope. In this talk I will give an overview of this theory and show applications to the study of special subsets of the assignment polytope. As a nice application I will discuss consequences for the well-known automorphism and isomorphism problem for graphs. This is joint work with Jesus De Loera, Christopher Hillar, and Peter N. Malkin. |
September 23 and 30 |
No O.R. Seminars: Computational Biology Seminars in Burnaby. |
2009-2010
August 18 *Wednesday* |
Analysis of bandwidth allocation with elastic demand on networks under budget constraints Abstract: This paper considers the problem of bandwidth allocation on end-to-end communication networks with multi-class traffic, where bandwidth is determined optimally under the budget and network constraints. We derive the blocking probabilities with respect to bandwidth, traffic demand and the available number of end-to-end paths based on the Erlang loss formula for all service classes. Depending upon the blocking probability, the paper presents different performance metrics, such as budget ratio, utilization level and bandwidth elasticity of blocking. Monotonicity and convexity of blocking probabilities with respect to different parameters, such as allocated bandwidth, traffic demand and the number of end-to-end paths, are also discussed. This is joint work with Chia-Hung Wang. |
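The blocking probabilities in the paper build on the Erlang loss (Erlang B) formula. A minimal single-class sketch of that formula, using the standard numerically stable recursion (illustrative only; the paper's multi-class, multi-path setting is richer), is:

    def erlang_b(traffic, servers):
        """Erlang B blocking probability for an offered load of 'traffic'
        Erlangs on 'servers' channels, via the recursion
        B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1))."""
        b = 1.0
        for k in range(1, servers + 1):
            b = traffic * b / (k + traffic * b)
        return b

    # Example: 10 Erlangs offered to 12 channels blocks roughly 12% of requests.
    print(erlang_b(10.0, 12))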
April 22 *SUR 15-300* |
Phylogeny-based Analysis of Gene Clusters Abstract: The order of genes in genomes provides extensive information. In comparative genomics, differences or similarities of gene orders are determined to predict functional relations of genes or phylogenetic relations of genomes. For this purpose, various combinatorial models can be used to identify gene clusters, groups of genes that are co-located in a set of genomes. We present a unified approach to model gene clusters and define the problem of labeling the inner nodes of a given phylogenetic tree with sets of gene clusters. Our optimization criterion in this context combines two properties: (1) Parsimony, i.e. the number of gains and losses of gene clusters has to be minimal. This aim can be approached using well known methods, like Fitch-Hartigan. (2) Consistency, i.e. for each ancestral node, there must exist at least one potential gene order that contains all the reconstructed clusters. Verifying this property is far from being trivial. We give an overview of already known and our new results for solving the "consistency problem" for different gene cluster models. Our findings range from a simple and efficient solution for adjacencies to NP-completeness for other models. We developed an oracle-based algorithm to reconstruct optimal labelings and finally show results on both simulated and real data. |
April 15 |
The Quickest Flow Problem Abstract: We discuss the paper of Burkard, Dlaska and Klinz on the quickest flow problem. |
March 25 |
Data-Driven Inventory Management Abstract: We consider inventory planning problems where the demand distribution is not available a priori, and lost sales are not observable. We take a non-parametric approach, and propose adaptive algorithms that generate a sequence of ordering decisions over time, where the decision in each period depends only on historical sales data. We show that our adaptive algorithms converge to the optimal solution, and establish the convergence rate. Our results are based on recent developments in on-line convex optimization. |
March 18 SUR 14-400 and IRMACS 10901 |
Sparse topology selection in graphical models of time series Abstract: Graphical models give a graph representation of conditional independence relations between random variables. Estimating the model from observed data therefore generally includes the selection of the graph topology as well as the parameters in the model. In a Gaussian graphical model, the topology of the graph specifies the sparsity pattern of the inverse covariance matrix, and several topology selection methods based on convex optimization and 1-norm regularization have been proposed recently. In this talk we consider an extension to graphical models of autoregressive Gaussian time series. We discuss the problem of maximum likelihood estimation of autoregressive models with conditional independence constraints, and convex techniques for topology selection via nonsmooth regularization. |
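A well-known static (i.i.d. Gaussian) instance of topology selection via 1-norm regularization, shown here only for orientation since the talk extends the idea to autoregressive time series, is the l1-penalized inverse covariance estimate

\[
\min_{X \succ 0}\ \operatorname{tr}(SX) - \log\det X + \lambda \sum_{i \neq j} |X_{ij}| ,
\]

where S is the sample covariance matrix and the zero pattern of the optimal X determines the selected graph topology.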
March 9 **Tuesday** |
On Methods for Solving Nonlinear Semidefinite Optimization Problems Abstract: The nonlinear semidefinite optimization problem arises from applications in system control, structural design, financial management, and other fields. However, much work is yet to be done to effectively solve this problem. We introduce some new theoretical and algorithmic development in this field. In particular, we discuss first and second-order algorithms that appear to be promising, which include the alternating direction method, the augmented Lagrangian method, and the smoothing Newton method. Convergence theorems are presented and preliminary numerical results are reported. |
January 8 **Friday, 12:30** |
The Bottleneck Traveling Salesman Problem and Some Variations M.Sc. Thesis defense (senior supervisor: Abraham Punnen) |
December 3 **Thursday, 11:30** |
First-Order Methods for Nuclear Norm Minimization and its Applications M.Sc. Thesis defense (senior supervisor: Zhaosong Lu) |
November 16 SUR 14-400 |
Gene trees and species trees: parsimony problems Abstract: A gene family is a set of genes, present in several genomes, possibly in multiple occurrences in some genomes, that all originate from a single ancestral gene. A gene tree is a binary tree that describes evolutionary relationships between the genes of the same family, in terms of three kinds of events: speciations, duplications and losses. Phylogenomics aims at inferring, from a set of gene trees, a species tree. Here we consider the following NP-complete optimization problem: infer the species tree that minimizes the number of gene duplications. I will present two results: - a description of tractable sets of gene trees (work with J.-P. Doyon and N. El-Mabrouk, Universite de Montreal) |
November 2 SUR 14-400 |
Introduction to multiple objective programming and simplex method for solving bi-objective programming Abstract: In this seminar, the main definitions and theorems of multiple objective programming (a linear program with several objective functions) will be presented. Also, solving bi-objective programs will be discussed, by extending the simplex method to these problems. (We assume familiarity with the simplex method.) |
October 26 SUR 14-400 |
Recent progress in the application of semidefinite programming to discrete optimization Abstract: The max-cut algorithm of Goemans and Williamson is 15 years old in 2009, and its impact in the area of semidefinite programming has been remarkable. In this talk, I will survey some of the main modelling and algorithmic developments since 1994 in the application of semidefinite programming to discrete optimization problems. I will also highlight promising directions for research in this area. |
October 5 |
Design in Inverse Problems Abstract: While much attention has been given to solving inverse problems, there is a gap in the question of design, that is, the design of experiments for ill-posed problems and the design of regularization functionals. In this talk we will present a framework to attack this problem. We show that it leads to a stochastic bilevel optimization problem and suggest algorithms for its solution. |
September 16 *Wednesday* **3:30** |
Necessary optimality conditions for bilevel programming problems |
2008-2009
July 27**2:30** *14-400* |
Combinatorial optimization problems for genome rearrangement phylogeny Abstract: TBA |
April 6
|
Extending Lovasz's Theta Body of a Graph to all Real Varieties Abstract: The theta body of a graph, constructed by Lovasz about 30 years ago, is one of the first examples of SDP approximation in discrete optimization. This body has a not-so-well-known definition in terms of sums of squares polynomials that appears in papers by Lovasz in the 1990s. Using this definition and a question he raises, we define a hierarchy of theta bodies for any real algebraic variety (real solutions to a polynomial system over the reals). We prove that these theta bodies are SDP relaxations of the convex hull of real varieties and are in fact a version of Lasserre's relaxations. For the max cut problem this hierarchy appears to be new. When the real variety is finite (as is usual in combinatorial optimization), we give a complete characterization of when the first theta body in the hierarchy equals the convex hull of the variety. Joint work with Joao Gouveia and Pablo Parrilo. |
April 2*Thursday* |
On the Implementation of a Primal-Dual Interior Point Method Abstract: We discuss Mehrotra's predictor-corrector technique in interior point methods. |
March 30
|
Presolving in Linear Programming Abstract: We discuss Andersen and Andersen's paper on presolving in linear programming. |
March 23 |
Improving Patient Flow and Resource Utilization in an Ambulatory Clinic Through Simulation Modelling Abstract: In this talk we consider an ambulatory care unit (ACU) in a large cancer centre, where operational and resource utilization challenges led to overcrowding, excessive delays, and concerns regarding safety of critical patient care duties. We use simulation to analyze the simultaneous impact of operations, scheduling, and resource allocation on patient wait time, clinic overtime, and resource utilization. The impact of these factors has been studied before, but usually in isolation. Further, our model considers multiple clinics operating concurrently, and includes the extra burden of training residents and medical students during patient consults. Through scenario analyses we found that the best outcomes were obtained when not one but multiple changes were implemented simultaneously. We developed configurations that achieve a reduction of up to 70% in patient wait times and 25% in physical space requirements, with the same appointment volume. We will present and discuss the key findings of this study and potential extensions to other outpatient services. |
March 16**SUR 4040** |
A Redistributed Proximal Bundle Method for Nonconvex Optimization Abstract: Proximal bundle methods have been shown to be highly successful optimization methods for nonsmooth convex optimization. This naturally leads to the question of whether bundle methods can be extended to work for nonconvex problems. In this talk we review some past (convex) results for proximal bundle methods, and demonstrate a method for extending classic bundle methods to a nonconvex setting. The method is based on generating a cutting-plane model not of the objective function (as most bundle methods do) but of a local convexification of the objective function. The convexification parameter is calculated on the fly, which allows for both strong convergence results and the ability to inform the user as to what proximal parameters are sufficiently large to ensure a unique proximal point of the model functions. This novel approach is sound from both the primal and dual perspective, which will make it possible to create workable nonconvex algorithms based on nonconvex VU theory. |
March 12**Thursday** |
A Newton-CG Augmented Lagrangian Method for Large Scale SDPs Abstract: We consider a Newton-CG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresponding solution mapping at the origin. For the inner problems, we show that the positive definiteness of the generalized Hessian of the objective function in these inner problems, a key property for ensuring the efficiency of using an inexact semismooth Newton-CG method to solve the inner problems, is equivalent to the constraint nondegeneracy of the corresponding dual problems. Numerical experiments on a variety of large scale SDPs with the matrix dimension n up to 4,110 and the number of equality constraints m up to 2,156,544 show that the proposed method is very efficient. We are also able to solve the SDP problem fap36 (with n=4,110 and m=1,154,467) in the 7th DIMACS Implementation Challenge much more accurately than in previous attempts. [This is a joint work with Xin-Yuan Zhao and Kim Chuan Toh at the National University of Singapore] |
March 12**Thursday** 4:30 p.m. |
Large-scale convex optimization over matrices for multi-task learning Abstract: A recently proposed approach to multi-task learning entails minimizing a function of the form: f(Q,W) = trace(W'Q^{-1}W) + trace(Q) + ||AW-B||^2 where Q is an n by n real matrix constrained to be positive definite, and W is an n by m real matrix. Here A is a p by n real matrix whose columns correspond to biological images, B is a p by m real matrix of measured data, p is the number of data points, m is the number of tasks. Q^{-1} is an (unknown) covariance matrix. In an application to Drosophila gene expression pattern analysis, m, n, p can be up to 100, 2000, 3000, so the number of variables is about 2 million. Worse, the minimum is typically not attained. We show that this problem can be reduced to a semidefinite program (SDP) that always has an optimal solution. The SDP is still too large to be solved by general SDP solvers. By considering the dual, we show that the dual problem reduces to a convex quadratic optimization problem over n by m real matrices and subject to a matrix version of a ball constraint. Moreover, projection onto the "ball" can be done efficiently using singular value decomposition. An implementation of Nesterov's accelerated gradient projection method finds a sufficiently accurate solution of the original problem in a matter of minutes on most instances. This is joint work with Ting Kei Pong (Univ. Washington) and Jieping Ye (Arizona State Univ.) |
February 23
|
Metamodel-based Design Optimization (MBDO) in Support of Modern Engineering Design Abstract: Modern engineering design features computation-intensive analyses and simulations. Tools such as finite element analysis (FEA) and computational fluid dynamics (CFD) are widely used in engineering design applications. How to integrate these various computation-intensive tools to support design synthesis and optimization has been a challenging task. This talk will start with an introduction of such a challenge, after which the inadequacy of classic gradient-based optimization methods will be analyzed from the perspective of design support. To tackle the challenge, a promising approach, metamodel-based design optimization (MBDO), will be introduced. Metamodel, literally meaning "model of model," is a simple surrogate of the computation-intensive function, on which further analysis and optimization is performed. One MBDO approach for multi-objective optimization, the Pareto Set Pursuing (PSP) method, will be described in detail. PSP overcomes common difficulties of MBDO approaches, demonstrates high efficiency for multi-objective optimization, and represents an innovative interpretation of optimization. Testing and application of PSP will also be given. Future research topics on MBDO will be highlighted at the end. This talk should be of interest to researchers in complex system identification, sampling, modeling, simulation-based design, and optimization. |
February 2
|
Energy and Quality Optimization in Mobile TV Broadcast Networks Abstract: Mobile TV networks enable users to watch their favorite TV shows on small hand-held devices while traveling. Market research forecasts that mobile TV will grow to be a multi-billion dollar industry with several hundred million subscribers in the next few years. In this talk, we will first give a brief overview of mobile TV networks highlighting some of the main research issues in them. Then, we will focus on the problem of minimizing the energy consumption of mobile devices while maximizing the visual quality of different TV channels. More specifically, in mobile TV broadcast networks, the base station broadcasts TV channels in bursts such that mobile devices can receive a burst of traffic and then turn off their radio frequency circuits till the next burst in order to save energy. The base station must carefully construct the burst transmission schedule for all TV channels. This is called the burst scheduling problem. We prove that the burst scheduling problem for TV channels with arbitrary bit rates is NP-complete. We then propose a practical simplification of the general problem, which allows TV channels to be classified into multiple classes with bit rates that have power of two increments, e.g., 100, 200, and 400 kbps. Using this practical simplification, we propose an optimal and efficient burst scheduling algorithm. In addition, we propose a near-optimal approximation algorithm to solve the general scheduling problem. We present theoretical analysis, simulation, and actual implementation in a mobile TV testbed to demonstrate the optimality, practicality, and efficiency of the proposed algorithms. |
January 26
|
Transversal structures on triangulations: a combinatorial study and algorithmic applications Abstract: We study the combinatorial properties and algorithmics of planar maps (planar graphs embedded in the plane). The techniques are illustrated on a particular family, namely plane triangulations with no separating triangle (called irreducible). These triangulations can be endowed with specific edge bicolorations, called transversal structures, which makes it possible to count the triangulations bijectively, generate them at random, encode them and draw them on a grid. Simulations indicate that the grid used by the drawing has with high probability a size close to 11n/27 * 11n/27, where n is the number of vertices. As we will see, this can be proved rigorously using tools from bijective combinatorics as well as analytic combinatorics. |
January 19**SUR 4040** |
Gröbner bases of lattices, corner polyhedra, and integer programming Abstract: We discuss the paper of Sturmfels, Weismantel and Ziegler describing connections between Gröbner bases, corner polyhedra and integer programming. |
November 26
|
Expressing the TSP and Matching Polytopes by Symmetric Linear Programs Abstract: We discuss Mihalis Yannakakis' famous result that the TSP and matching polytopes cannot be expressed by symmetric LPs of polynomial size. |
November 21**4:30 Friday** SUR 15-300 & IRMACS 10908 |
Some analogies between simplex and interior point methods Abstract: Linear optimization consists of maximizing, or minimizing, a linear function over a domain defined by a set of linear inequalities. The simplex and primal-dual interior point methods are currently the most computationally successful algorithms. While the simplex methods follow an edge path, the interior point methods follow the central path. In this talk we highlight links between the edge and central paths, and between the diameter of a polytope and the largest possible total curvature of the associated central path. We prove continuous analogues of two results of Holt-Klee, and Klee-Walkup. We construct a family of polytopes which attain the conjectured order of the largest total curvature, and we prove that the special case where the number of inequalities is twice the dimension is equivalent to the general case. This talk is based on joint work with Tamas Terlaky (Lehigh University) and Yuriy Zinchenko (University of Calgary). |
November 19
|
Linear Satisfiability Algorithm for 3CNF Formulas of Certain Signaling Networks Abstract: A simple model of signal transduction networks in molecular biology consists of CNF formulas with two and three literals per clause. A necessary condition for correctness of the network model is satisfiability of the formulas. Deciding satisfiability turns out to be NP-complete. However, for a subclass that still is of practical interest, a linear satisfiability algorithm and related characterization of unsatisfiability can be established. The subclass is a special case of so-called closed sums of CNF formulas. Computational results on an extensive curated human T-cell model are included. Joint work with Kathrin Niermann, Klaus Truemper and Robert Weismantel |
November 13**Thursday**
|
Comparing Two Systems: Beyond Common Random Numbers Abstract: Suppose one is comparing two closely related stochastic systems via simulation. Common random numbers (CRN) involves using the same streams of uniform random variables on both systems to sharpen the comparison. One can view CRN as a special choice of copula that gives the joint distribution on the inputs to both systems. We discuss the possibility of using more general copulae, including simple examples that show how this can outperform CRN. Joint work with Sam Ehrlichman |
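As a toy illustration of why CRN sharpens comparisons (the "system" below is entirely hypothetical and not from the talk), the Python sketch estimates the difference in expected output of two parameter settings with and without shared uniforms; both estimators have the same mean, but the CRN one has a much smaller variance:

    import random

    def system(theta, u):
        # stand-in for a full simulation run driven by one uniform random number
        return (theta + u) ** 2

    def compare(n, crn=True, seed=0):
        """Estimate E[system(1.1, U) - system(1.0, U)] with or without CRN."""
        rng = random.Random(seed)
        diffs = []
        for _ in range(n):
            u1 = rng.random()
            u2 = u1 if crn else rng.random()   # CRN: reuse the same uniform
            diffs.append(system(1.1, u1) - system(1.0, u2))
        mean = sum(diffs) / n
        var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
        return mean, var

    print(compare(10000, crn=True))    # same mean, small variance
    print(compare(10000, crn=False))   # same mean, variance hundreds of times larger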
November 12
|
Appointment Scheduling with Discrete Random Durations and Applications Abstract: We present two papers and give an overview of the existing work. In the first paper*, we determine optimal appointment times for a given sequence of jobs (medical procedures) on a single processor (operating room, examination facility), to minimize the expected total underage and overage costs when each job has a random duration given by its discrete probability distribution. A simple condition on the cost rates implies that the objective function is submodular and L-convex. Then there exists an optimal appointment schedule that is integer and can be found in polynomial time. The second paper** is also concerned with the same appointment scheduling problem but without assuming the knowledge of the job duration distributions. Instead, information on the duration distributions may be obtained by random sampling. We show that the objective function is convex under a simple condition and characterize its subdifferential, and determine the number of independent samples required to obtain a provably near-optimal solution with high probability. There are many other challenging and important real-life applications for appointment scheduling including healthcare diagnostic operations (such as CAT scan, MRI) and physician appointments, as well as project scheduling, container vessel and terminal operations, gate and runway scheduling of aircrafts in an airport. *, ** Joint work with Maurice Queyranne (Sauder School of Business, UBC) |
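To see the underage/overage trade-off in its simplest form, consider a single job with random duration D and appointment allowance a (this one-job illustration is standard newsvendor reasoning, not the papers' multi-job models):

\[
g(a) \;=\; c_u\,\mathbb{E}\big[(D-a)^{+}\big] \;+\; c_o\,\mathbb{E}\big[(a-D)^{+}\big],
\]

which is convex in a and is minimized by any \(c_u/(c_u+c_o)\)-quantile of D; the multi-job problems studied in the talk couple many such terms through the job start times.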
October 3**Friday** Irmacs 10908 & SUR 14-400 |
Kirchberger's theorem, coloured and multiplied
|
September 24 |
Rational Generating Functions and Integer Programming Games Abstract: We explore the computational complexity of computing pure Nash equilibria for a new class of strategic games called integer programming games with difference of piecewise linear convex payoffs. Integer programming games are games where players' action sets are integer points inside of polytopes. Using recent results from the study of short rational generating functions for encoding sets of integer points pioneered by Alexander Barvinok, we present efficient algorithms for enumerating all pure Nash equilibria, and other computations of interest, such as the pure price of anarchy, and pure threat point, when the dimension and the number of convex linear pieces in the payoff functions are fixed. Sequential games where a leader is followed by competing followers (a Stackelberg--Nash setting) are also considered. (Joint work with Matthias Koeppe and Maurice Queyranne.) |
September 3 |
A robust approach to handle risk constraints in mid and long-term energy planning of hydro-thermal systems Abstract: We consider the optimal management of a hydro-thermal power system in the mid and long term. This is a large-scale multistage stochastic linear program, often solved by combining sampling with dynamic programming, for example by stochastic dual dynamic programming techniques. However, such methodologies are no longer applicable, at least not in a straightforward manner, when the model is set in a risk-averse formulation. We propose to take into account risk by using a chance-constrained formulation of the energy-planning problem. In particular, for a given confidence level, and given critical minimum reservoir levels at each time stage, reservoirs can be kept above the critical volume in a probabilistic manner. When the uncertainty in the problem (typically, the natural inflows) is represented by a periodic autoregressive stochastic process, the corresponding probabilistic constraints can be expressed as Conditional Value-at-Risk constraints and the chance-constrained problem is a linear program. Such a linear program can be thought of as a robustified deterministic variant of the original stochastic programming problem. Several robust adaptive management strategies are then proposed and the methodology is applied to the Brazilian system. This is joint work with Vincent Guigues. |
2007-2008
April 16*1:30* |
Computational Convex Analysis and Applications Abstract: Computational Convex Analysis focuses on the practical computation of fundamental transforms arising in Convex Analysis. Currently two main frameworks exist for such Computer-Aided investigation: fast algorithms based on discretization enhanced with computational geometry algorithms, and the Piecewise Linear-Quadratic framework based on quadratic spline approximation. After recalling the current state of the art, we will illustrate where these algorithms appear in a wide variety of fields: thermodynamics, robot navigation, medical imaging, image processing, and network communications. |
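One of the fundamental transforms in question is the Legendre-Fenchel conjugate f*(s) = sup_x [s x - f(x)]. A brute-force discretized sketch in Python is shown below (quadratic time; the fast algorithms surveyed in the talk achieve linear time, which this sketch does not attempt):

    def discrete_conjugate(xs, fxs, slopes):
        """Discrete Legendre-Fenchel conjugate f*(s) = max_x [s*x - f(x)]
        over a grid of x values, evaluated at each slope s."""
        return [max(s * x - fx for x, fx in zip(xs, fxs)) for s in slopes]

    # Example: f(x) = x^2 sampled on [-2, 2]; the conjugate is s^2 / 4.
    xs = [i / 10.0 for i in range(-20, 21)]
    fxs = [x * x for x in xs]
    print(discrete_conjugate(xs, fxs, [0.0, 1.0, 2.0]))   # [0.0, 0.25, 1.0]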
April 3*Thurs. 4:30* *15-300* |
Value, Trading Strategies, Technology, and Financial Investment of Natural Gas Storage Assets Abstract: By valuing a natural gas storage facility as a portfolio of real options, we study the value, trading strategies, technology, and financial investment of the storage for a risk neutral natural gas marketer to trade gas in an energy commodity exchange (spot market). The storage facility under consideration can represent either a highly deliverable peak-load storage or a less deliverable base-load storage. Its operational flexibilities could be valued by the expected maximum profit earned from seasonal and daily trading. The primary focus is to optimize the trading strategy to extract the embedded option value and assess the financial value of a storage contract to gain whole or partial control of the facility over years. We show that the optimal trading policy will refill (withdraw) gas only when the market price exceeds (is exceeded by) the convenience yield, an amount of the value to keep gas underground, less (plus) the marginal operating cost. Moreover, we explore the contingency of the optimal gas trading policy on the prediction of the spot prices by characterizing when it is optimal to sell low and buy high and when, on the other hand, it is optimal to buy high and sell higher. Finally, we demonstrate that the analysis can be deployed to value a firm storage contract, and related technological deployment and financial investment. |
April 2 |
A Faster and Simpler Approximation for Maximum Multicommodity Flow Abstract: We discuss the maximum multicommodity flow approximation algorithm Garg and Könemann describe in their paper, "Faster and Simpler Algorithms for Multicommodity Flows and other Fractional Packing Problems" (SIAM J. Comput. 37 [2007], no. 2). Their approach is to substitute shortest path computations for min-cost flow computations, which they apply to other related multicommodity flow problems. |
March 12 |
Normal Cones and Nonconvex Optimization Abstract: In constrained optimization, the normal cone is to the constraint set what derivatives are to the objective function. That is, normal cones provide first order information describing in a limiting sense how the constraint set behaves near the point of interest. Thus normal cones allow researchers to extend the notions and algorithms of unconstrained optimization (min{f (x)}) into a constrained setting (min{f (x) : x in S}). In this talk we discuss the basic definition of a normal cone both for convex and nonconvex constraint sets. We provide illustrative examples, and demonstrate how normal cones provide the tools necessary to develop constraint identification theory for nonconvex optimization problems. |
February 27 |
Recent developments on LQP-based numerical algorithm Abstract: The talk focuses on the computational aspects of the so-called logarithmic-quadratic proximal (LQP) method, which was originally proposed by A. Auslender, et al. Some efficient LQP-based numerical algorithms are presented for solving nonlinear complementarity problems and structured variational inequalities. These new algorithms are applied to solve some traffic equilibrium problems. |
February 13*Room 15-300* |
Primal-dual gradient methods for linear cone programming and applications Abstract: In this talk, we consider linear cone programming (LCP) problem, and propose general primal-dual convex (smooth and/or nonsmooth) minimization reformulations for it. We then discuss a class of gradient methods suitable for solving these reformulations including Nesterov's smooth method, Nesterov's smooth approximation scheme, and Nemirovski's prox-method. The iteration-complexity bounds of these methods applied to the aforementioned reformulations are derived. We also propose a variant of Nesterov's smooth (VNS) method that has clear advantages over Nesterov's smooth one. We then discuss the VNS method for solving three natural primal-dual convex smooth minimization reformulations of LCP problem. The associated iteration complexity bounds and overall arithmetic operation cost are estimated and compared. Finally, we apply the VNS method to solve MAXCUT, Lovász capacity, and several important large-scale problems arising in compressed sensing that are challenging to simplex and/or interior point methods. |
January 23*Room 15-300* |
Fraser Health uses Mathematical Programming to Plan its Inpatient Hospital Network Abstract: Fraser Health (FH), the largest and fastest growing health region in British Columbia, launched in 2005 the Acute Care Capacity Initiative (ACCI) to look forward into the future and identify the major challenges for the region in terms of capacity, from an evidence-based analytical approach. The ACCI focus was on projecting the population’s need for clinical services over the next 15 years, developing clinical service plans to address that need, and creating service mix/siting configurations for the hospitals in its network of care. This talk is focused on the clinical service mix/siting phase of the ACCI, with emphasis on the mathematical programming model used in the planning process to develop configuration scenarios for FH’s network of hospitals. |
November 28 |
Local Search with Constraint Propagation and Conflict-Based Heuristics Abstract: We will introduce a class of algorithms referred to as Hybrid Search Methods for solving Constraint Satisfaction problems. There are two major classes of algorithms used in CSP: constructive search and local search. Hybrid search algorithms combine the two methods in an effort to utilize their advantages. The proposed algorithm is tested on the Open Shop Problem, and some numerical results are presented. |
November 21 |
Unified Canonical Duality Theory for Solving a Class of Nonconvex Problems with Applications in Integer Programming David Yang Gao, Virginia Tech |
November 14 |
Online Scheduling with Stochastic Methods Abstract: Online scheduling problems differ from their offline counterparts in that the problem data is not completely available from the beginning but rather revealed during execution. Generally, this involves new jobs/activities/requests unexpectedly arriving during execution that must now be considered. In addition to having to make scheduling decisions based upon future uncertainty, time constraints often leave one with little time to make a new decision. We discuss the stochastic methods proposed by Chang, Givan and Chong (2000) and Bent and Van Hentenryck (2004) for dealing with these issues. |
November 8*Thurs. 4pm* |
Advances in portfolio decision analysis Abstract: This seminar includes three presentations on the following subtopics: - Robust Portfolio Modelling (Juuso Liesiö, Pekka Mild, Ahti Salo) |
November 1*Thurs. 4pm* |
New Classification Methods for Cancer Diagnosis and Biomarker Discovery Abstract: Though many methods already exist for determination of markers and tumor diagnosis using microarray data, more precise and accurate methods for feature analysis and selection as well as tumor classification are still needed. This talk will present an overview of the application of a new approach based on data mining and machine learning tools for tumor classification and for the selection of marker genes. |
October 31 |
Minimal generating families of parametric discrete and polyhedral sets with applications to survivable network design Abstract: We consider the capacitated network design problem under arc-survivability constraints. We present a new method based on the study of the family of sets of feasible routings for fixed capacities and demands. It turns out that Minkowski sums play an important role in this setting and arc-survivability is easily modelled using non-linear, Minkowski-additive arc-survivability functionals. The crucial feature of this framework is that the non-linear survivability constraints can be exactly linearized whenever a finite integral generating set (with respect to Minkowski sums) is known. We present new algorithms to compute minimal integral generating sets (with respect to Minkowski sums) of families of discrete sets (for use with the model of integral flows) and parameterized polyhedra (for the model of fractional flows). |
October 24 |
Complexity of the simplex method for linear fractional assignment problem Abstract: In this talk we show that the complexity of simplex method for the linear fractional assignment problem (LFAP) is strongly polynomial. Although LFAP can be solved in polynomial time using various algorithms such as Newton's method or binary search, no polynomial time bound for the simplex method for LFAP was known. |
October 17 |
Semidefinite liftings for stable set and matching Abstract: This is a survey of the "lift and project" approach to solving integer programs. We focus on the semi-definite matrix relaxations proposed by Lovász and Schrijver and the examples of finding maximum stable sets and matchings in a graph. |
October 10 |
Packing T-joins Abstract: T-joins are a common generalization of st-paths and perfect matchings, and have proved to be an important structure in combinatorial optimization. Numerous classic results in graph theory, for instance Menger's theorem and Konig's theorem, may be viewed as giving optimal packings of T-joins in certain special cases. Here we present a couple of theorems (joint with Paul Seymour) which give fairly good packings of T-joins in general circumstances. |
October 3 |
Improved bounds for the symmetric rendezvous value on the line Abstract: A notorious open problem in the field of rendezvous search is to decide the rendezvous value of the symmetric rendezvous search problem on the line, when the initial distance apart between the two players is 2. We show that the symmetric rendezvous value is within the interval (4.1520, 4.2574), which considerably improves the previous best known result (3.9546, 4.3931). To achieve the improved bounds, we call upon results from absorbing Markov chain theory and mathematical programming theory - particularly fractional quadratic programming and semidefinite programming. Moreover, we also establish some important properties of this problem, which may be of independent interest and useful for resolving this problem completely. Finally, we conjecture that the symmetric rendezvous value is asymptotically equal to 4.25 based on our numerical calculations. |
September 19 |
Modeling and optimization of loan portfolios Abstract: This presentation is a very elementary introduction to credit risk modeling. We will describe how such a model can be calibrated against data, and how it can then be used to assess risk. We will also show how to formulate Markowitz type portfolio optimization which minimizes risk for a fixed expected loss. |
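For reference, a generic Markowitz-type formulation of "minimize risk at a fixed expected level" reads as follows (illustrative only; in the talk the inputs come from the calibrated credit-risk model rather than from asset return statistics):

\[
\min_{x}\; x^{\top}\Sigma x
\quad\text{s.t.}\quad
\mu^{\top}x = r, \qquad \mathbf{1}^{\top}x = 1, \qquad x \ge 0 ,
\]

where x holds the portfolio weights, \Sigma is a covariance matrix, \mu the vector of expected losses (or returns), and r the fixed expected level.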
2006-2007
Spring 2007 |
|
April 16 |
The capacitated max k-cut problem Abstract: We consider a capacitated max k-cut problem in which a set of vertices is partitioned into k subsets. Each edge has a non-negative weight, and each subset has a possibly different capacity that imposes an upper bound on its size. The objective is to find a partition that maximizes the sum of edge weights across all pairs of vertices that lie in different subsets. We describe a local-search algorithm that obtains a solution with value no smaller than 1-1/k times the optimal solution value. This improves a previous bound of 1/2 for the max k-cut problem with fixed, though possibly different, sizes of subsets. |
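A minimal relocation-based local search for capacitated max k-cut is sketched below in Python (illustrative only; the algorithm of the talk and its 1-1/k analysis rely on a specific move set, and with tight capacities swap moves are also needed, which this sketch omits):

    import itertools

    def cut_value(w, parts):
        """Total weight of edges whose endpoints lie in different parts."""
        label = {v: i for i, part in enumerate(parts) for v in part}
        return sum(wt for (u, v), wt in w.items() if label[u] != label[v])

    def local_search_max_kcut(w, vertices, capacities):
        """Relocate single vertices between parts, respecting capacities,
        while the cut value improves.  Assumes total capacity covers all vertices."""
        k = len(capacities)
        parts = [set() for _ in range(k)]
        for v in vertices:                      # greedy start by load ratio
            i = min(range(k), key=lambda p: len(parts[p]) / capacities[p])
            parts[i].add(v)
        improved = True
        while improved:
            improved = False
            for v, (i, j) in itertools.product(vertices, itertools.permutations(range(k), 2)):
                if v in parts[i] and len(parts[j]) < capacities[j]:
                    base = cut_value(w, parts)
                    parts[i].remove(v); parts[j].add(v)
                    if cut_value(w, parts) <= base:   # revert non-improving moves
                        parts[j].remove(v); parts[i].add(v)
                    else:
                        improved = True
        return parts

    w = {(1, 2): 3.0, (2, 3): 1.0, (1, 3): 2.0, (3, 4): 4.0}
    print(local_search_max_kcut(w, [1, 2, 3, 4], capacities=[2, 2]))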
April 2 |
Robust discrete optimization and network flows Abstract: In this talk we will discuss the paper Robust discrete optimization and network flows, by Bertsimas and Sim. The authors proposed an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable. In particular, when uncertain data appears in both the cost coefficients and the constraints of an integer programming problem, the robust counterpart will result in a solution with probabilistic bounds on constraint violation; when only the cost coefficients are subject to uncertainty and the problem is a 0-1 discrete optimization problem on n variables, then the robust counterpart can be solved by solving at most n+1 instances of the original problem. Polynomially solvable cases and approximations will be analyzed. Finally, the paper proposed an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network. |
March 26 |
Polynomial Time Approximation Schemes for Euclidean Traveling Salesman and Other Geometric Problems Abstract: For every fixed c > 1 and any given n nodes in the plane, we can find a (1 + 1/c) approximation to the optimum TSP tour in O(n (log n)^O(c)) time. The previous best approximation algorithm (due to Christofides) achieves a 3/2 approximation in polynomial time. The problem of whether a polynomial time approximation scheme for Euclidean TSP exists was open prior to Arora's paper. We will present this TSP polynomial approximation scheme and we will also give similar polynomial approximation schemes for a few other NP-hard Euclidean problems such as Minimum Steiner Tree, k-TSP and k-MST. |
March 19 |
On Goemans's and Williamson's MAXCUT algorithm Abstract: This will be an expository talk on the seminal work of Goemans and Williamson towards using semidefinite programming to develop improved approximation algorithms for combinatorial optimization problems, particularly MAX CUT and 2SAT. I will introduce/review semidefinite programming and then outline how this can be used to give a polynomial time randomized algorithm which finds a cut with expected weight at least 0.87856 times the maximum cut weight. Previous to this work, the best known approximation guarantee was only 0.5 + o(1). We will then examine a similar algorithm for 2SAT. |
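For context, MAX CUT and the Goemans-Williamson SDP relaxation can be written as

\[
\max_{x \in \{-1,1\}^n} \tfrac{1}{2}\sum_{(i,j)\in E} w_{ij}\,(1 - x_i x_j)
\;\;\le\;\;
\max_{\|v_1\| = \dots = \|v_n\| = 1} \tfrac{1}{2}\sum_{(i,j)\in E} w_{ij}\,\bigl(1 - v_i^{\top} v_j\bigr),
\]

and rounding the optimal vectors with a random hyperplane (set x_i = sign(r^T v_i) for a random direction r) cuts edge (i, j) with probability arccos(v_i^T v_j)/\pi, which is at least 0.87856 times that edge's contribution (1 - v_i^T v_j)/2 to the SDP value; this is exactly where the 0.87856 guarantee comes from.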
March 12 |
Constructing Incremental Sequences in Graphs Abstract: Given a weighted graph G=(V,E,w), we investigate the problem of constructing a sequence of n=|V| subsets of vertices M_1,...,M_n (called groups) with small diameters, where the diameter of a group is calculated using distances in G. The constraint on these n groups is that they must be incremental: M_1 is a subset of M_2 ... is a subset of M_n=V. The cost of a sequence is the maximum ratio between the diameter of each group M_i and the diameter of a group N_i with i vertices and minimum diameter. This quantity captures the impact of the incremental constraint on the diameters of the groups in a sequence. We give general bounds on the value of this ratio and we prove that the problem of constructing an optimal incremental sequence cannot be solved approximately in polynomial time with an approximation ratio less than 2 unless P = NP. Surprisingly, the related eccentricity problem is in P. We develop an optimal eccentricity algorithm and use it as the basis of a 4-approximation algorithm for the diameter problem. We show that the analysis of our algorithm is tight. Joint work with Ralf Klasing, Christian Laforest, and Nicolas Thibault |
March 5 |
Improving solution times in VLSI optimization problems Abstract: Advances in technology for the manufacturing of integrated circuits have resulted in extremely large, time-consuming problems of how to lay out components for optimal circuit performance. The problem of wire layout (or routing) in VLSI design can be written as a large-scale integer linear program with upper bound constraints. Due to the size of these programs, optimization by classical methods can be extremely time consuming. In this talk we will discuss some of the techniques and ideas that have been emerging on how to reduce the time required to solve these highly structured linear programs. The majority of the talk will focus on applying novel preprocessing techniques to simplify the problem. However, if time permits, we will also discuss recent ideas in parameter tuning, and various methods of embedding the upper bound constraints into the constraint matrix in order to make better use of interior point algorithms. |
February 19 |
VLSN search: empirical analysis and theoretical reasoning Abstract: In this talk, we introduce very large-scale neighborhood (VLSN) search algorithms for solving discrete optimization problems. VLSN algorithms are local search schemes that use neighborhoods of very large size. We discuss theoretical analysis of such algorithms in terms of the associated neighborhood size or dominated solutions. Special neighborhoods are introduced that can be searched using matching problems or simpler integer programs. Results of an extensive experimental analysis are discussed using the generalized assignment problem and its variations. |
February 12 |
A New Cone Programming Approach for Robust Portfolio Selection Abstract: Robust portfolio selection problems have recently been studied by several researchers. In their work, "separable" uncertainty sets of the problem parameters (e.g., mean and covariance of the random asset returns) were considered. These uncertainty sets share two common drawbacks: i) the actual confidence level of the uncertainty set is unknown, and it can be much higher than the desired one; and ii) the uncertainty set is fully or partially box-type. The consequence of these drawbacks is that the resulting robust portfolio can be too conservative and, moreover, is usually highly non-diversified, as observed in computational experiments. To combat these drawbacks, we consider a factor model for the random asset returns. For this model, we introduce a "joint" ellipsoidal uncertainty set for the model parameters and show that it can be constructed as a confidence region associated with a statistical procedure applied to estimate the model parameters for any desired confidence level. We further show that the robust maximum risk-adjusted return problem with this uncertainty set can be reformulated and solved as a cone programming problem. Some computational experiments are performed to compare the performance of the robust portfolios corresponding to our "joint" uncertainty set and Goldfarb and Iyengar's "separable" uncertainty set. We observe that our robust portfolio performs much better than Goldfarb and Iyengar's in terms of wealth growth rate and transaction cost; moreover, our robust portfolio is fairly diversified, whereas Goldfarb and Iyengar's is highly non-diversified. |
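For orientation, the sketch below shows how a much simpler robust portfolio problem, with a single ellipsoidal uncertainty set on the mean return, becomes a cone program solvable with cvxpy. The factor model and the joint uncertainty set of the talk are more involved, so the formulation, the parameters P and kappa, and the risk-aversion weight gamma here are illustrative assumptions only.

import numpy as np
import cvxpy as cp

def robust_portfolio(mu_hat, Sigma, P, kappa, gamma):
    """Simplified robust portfolio sketch: the mean return is only known to lie
    in the ellipsoid { mu_hat + P u : ||u|| <= kappa }.  Its worst case over the
    ellipsoid is mu_hat'x - kappa*||P'x||, a second-order cone term, so the
    whole problem is a cone program."""
    mu_hat = np.asarray(mu_hat, dtype=float)
    P = np.asarray(P, dtype=float)
    n = len(mu_hat)
    x = cp.Variable(n)
    worst_case_return = mu_hat @ x - kappa * cp.norm(P.T @ x, 2)
    risk = cp.quad_form(x, Sigma)                   # nominal variance of the portfolio
    prob = cp.Problem(cp.Maximize(worst_case_return - gamma * risk),
                      [cp.sum(x) == 1, x >= 0])     # fully invested, long only
    prob.solve()
    return x.value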
February 5 |
Introduction to robust optimization |
Fall 2006 |
|
December 11 |
Dynamic SPECT reconstruction using Kalman algorithm Abstract: Single photon emission computed tomography (SPECT) is a nuclear medicine imaging technique that is extensively used for clinical diagnosis. Classical SPECT reconstruction algorithms assume that the activity does not vary in time. This is not always true in practice. For instance, when we study Teboroxime cardiac images, the activity changes in time. Thus arises the need to explore dynamic SPECT. In this talk, I will explore a Kalman reconstruction approach to estimate the time-varying activity. While other methods assume a priori knowledge about the activity behavior, the Kalman approach assumes little. We formulate a state-space model of the problem, which we solve using the optimal Kalman filter and smoother. Numerical results are provided. |
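For readers unfamiliar with the Kalman machinery, the following minimal Python sketch shows the filtering step for a generic linear state-space model. In the dynamic SPECT setting the state would hold the time-varying activity and the observation matrix would be the projection (system) matrix, but those particulars are assumptions here, not the speaker's implementation.

import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Minimal Kalman filter sketch for the linear state-space model
        x_{t+1} = A x_t + w_t,  w_t ~ N(0, Q)
        y_t     = C x_t + v_t,  v_t ~ N(0, R)
    Returns the filtered state estimates for the measurement sequence y."""
    x, P = x0, P0
    estimates = []
    for yt in y:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the new measurement
        S = C @ P @ C.T + R                        # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (yt - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return estimates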
December 4 |
Optimization algorithms for dynamic SPECT image reconstruction Abstract: Nuclear medicine is a medical specialty that aids diagnosis as well as treatment planning and monitoring by producing noninvasive diagnostic images using radiopharmaceuticals. In dynamic SPECT we study the dynamics of physiological processes and biochemical functions of a living organism. One of the problems we face is the reconstruction of the time activity curve. Because the emitted activities are time dependent, the classical Expectation Maximization method fails to reconstruct a dynamic image. In this talk, I will present new reconstruction algorithms which preserve the shape of the time activity curve for each voxel. The proposed methods take into account information about the dynamics of the activity as linear inequality constraints. We consider two classes of reconstruction methods. The first class is an interior gradient iterative algorithm based on Bregman generalized projections. The second is a Newton-based method that uses a line search to ensure global convergence. Numerical experiments verify the effectiveness of the algorithms. |
November 27 |
Allocating Resources to Control Infectious Diseases by Margaret L. Brandeau Karel Casteels, SFU Surrey |
November 20 |
Colourful linear programming Abstract: We study a colourful generalization of the linear programming feasibility problem, comparing the algorithms introduced by Barany and Onn with new methods. We perform benchmarking on generic and ill-conditioned problems, as well as recently introduced highly structured problems. We show that some algorithms can lead to cycling or slow convergence, but we provide extensive numerical experiments which show that others perform much better than predicted by complexity arguments. We conclude that an effective method in practice is a proposed multi-update algorithm. |
November 13 |
Seminar cancelled due to Remembrance Day |
November 6 |
Models for Kidney Allocation by Stefanos A. Zenios Randall Pyke, SFU Surrey |
November 1 |
Optimization and Highly Informative Graph Invariants Abstract: It is known that graph invariants which contain a great quantity of information on graph structure (for example, spectral invariants) are obtained by solving extremal problems on graphs. Recently, such highly informative graph invariants have been applied to solving optimization problems on graphs (e.g., the travelling salesman problem (TSP)). Using these paradigms, several relations, interconnections and interactions between graph theory and mathematical programming are described. A model of the TSP based on semidefinite programming and the algebraic connectivity of graphs is described. A class of relaxations of this TSP model is defined and some solution techniques based on this class are proposed. Several examples of graph invariants defined by optimization tasks are also presented. Using several spectrally based graph invariants, we treat the graph isomorphism problem. |
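As a small example of a spectral invariant obtained from an eigenvalue (extremal) problem, the sketch below computes the algebraic connectivity mentioned above, assuming the graph is given by a numpy adjacency matrix.

import numpy as np

def algebraic_connectivity(A):
    """Algebraic connectivity (Fiedler value) of a graph from its adjacency
    matrix A: the second-smallest eigenvalue of the Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    eigenvalues = np.sort(np.linalg.eigvalsh(L))
    return eigenvalues[1]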
October 23 |
A primal-dual first-order method for cone programs Abstract: In this talk, we first review Nesterov's optimal methods for a class of smooth/non-smooth convex programs. A variant of his methods will be proposed. Based on these methods, we propose some primal-dual first-order methods for cone programs. Some preliminary computational results of our methods on randomly generated large-scale linear programming and semidefinite programming problems are then presented. We conclude that our methods are very promising for solving large-scale (dense) cone programs. Joint work with G. Lan and R. Monteiro, Georgia Tech. |
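For context, here is a generic Python sketch of Nesterov-style acceleration for minimizing a smooth convex function with an L-Lipschitz gradient; the speaker's variant and the primal-dual extension to cone programs differ in the details, so this is background only.

import numpy as np

def nesterov_accelerated_gradient(grad, x0, L, iterations=500):
    """Sketch of an accelerated (Nesterov-style) gradient method: take a
    gradient step from an extrapolated point, then update the momentum."""
    x, y = x0.copy(), x0.copy()
    t = 1.0
    for _ in range(iterations):
        x_next = y - grad(y) / L                         # gradient step from the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum/extrapolation step
        x, t = x_next, t_next
    return x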
October 16 |
Supply Chain Management of Blood Banks by W.P. Pierskalla Ron Ferguson, IRMACS, SFU |
October 2 |
The proximal average Joint work with Yves Lucet (UBC Okanagan) and Mike Trienis (UBC Okanagan) |
September 25 |
Location of health care facilities by Mark S. Daskin and Latoya K. Dean Arman Kaveh, SFU Surrey |
September 18 |
Benchmarking using DEA: The case of mental health organizations by Y.A. Ozcan, E. Merwin, K. Lee and J.P. Morrissey Annie Ruonan Zhang, SFU Surrey |
September 11 |
Capacity planning and management in hospitals by Linda V. Green Dan Benvenuti, SFU Surrey |
2005-2006
Summer 2006 |
|
August 10 |
Health care delivery: current problems and future challenges Snezana Mitrovic-Minic, SFU Surrey |
August 3 |
Applying systematic local search to job shop scheduling problems Lei Duan, School of Computing Science, SFU |
July 18 |
A new approach to interrelated two-way clustering of gene expression data Distinguished Speaker: Prof. B. Chandra, Head, Department of Mathematics, Indian Institute of Technology, Delhi |
July 13 |
Neighborhood search heuristics Abraham P. Punnen, SFU Surrey |
July 6 |
An efficient variable neighborhood search for the generalized assignment problem Snezana Mitrovic-Minic, SFU Surrey |
June 29 |
Using group theory to construct and characterize metaheuristic search neighborhoods by Bruce W. Colletti and J. Wesley Barnes Sandy Rutherford, IRMACS, SFU |
June 8 |
Controlled pool maintenance for metaheuristics by Peter Greistorfer and Stefan Voss Lei Duan, School of Computing Science, SFU |
June 1 |
Logistics management: An opportunity for metaheuristics by Helena R. Lourenco Arman Kaveh, SFU Surrey |
May 25 |
On the integration of metaheuristic strategies in constraint programming by Mauro Dell’Amico and Andrea Lodi Annie Ruonan Zhang, SFU Surrey |
May 15 |
Lessons from applying and experimenting with scatter search by Manuel Laguna and Vinicius A. Armentano Dan Benvenuti, SFU Surrey |
May 8 |
Scatter Search vs. Genetic Algorithms by Rafael Marti, Manuel Laguna and Vicente Campos Ron Ferguson, IRMACS, SFU |
Spring 2006 |
|
April 3 |
Second level local optimization approach to employee timetabling by D. Bokal, G. Fijavz, D. Vodopivec, and S. Pintar Drago Bokal, University of Ljubljana, Slovenia |
March 27 |
A Very Fast Tabu Search Algorithm for the Job Shop Problem – Part 2 by Józef Grabowski and Mieczyslaw Wodecki Sandy Rutherford, IRMACS, SFU |
March 20 |
Some New Ideas in Tabu Search for Job Shop Scheduling by Eugeniusz Nowicki and Czeslaw Smutnicki Lei Duan, School of Computing Science, SFU |
March 13 |
A Tabu Heuristic for the Uncapacitated Facility Location Problem by Minghe Sun Randall Pyke, SFU Surrey |
March 6 |
Review session |
February 27 |
Tabu Search Heuristics for the Vehicle Routing Problem by Jean-François Cordeau and Gilbert Laporte Snezana Mitrovic-Minic, SFU Surrey |
February 20 |
Cancelled due to the University Reading Break |
February 13 |
A Very Fast Tabu Search Algorithm for the Job Shop Problem by Józef Grabowski and Mieczyslaw Wodecki Sandy Rutherford, IRMACS, SFU |
February 6 |
Scatter Search Methods for the Covering Tour Problem by Roberto Baldacci, Marco A. Boschetti, Vittorio Maniezzo and Marco Zamboni Annie Ruonan Zhang, SFU Surrey |
January 27 |
A Multistart Scatter Search Heuristic for Smooth NLP and MINLP Problems by Zsolt Ugray, Leon Lasdon, John C. Plummer, Fred Glover, Jim Kelly and Rafael Martí Dan Benvenuti, SFU Surrey |
January 23 |
A Scatter Search Tutorial for Graph-Based Permutation Problems by César Rego and Pedro Leão Snezana Mitrovic-Minic, SFU Surrey |