Time series data are ubiquitous. In domains as diverse as finance, retail, entertainment, transportation and health care, we observe a fundamental shift away from parsimonious, infrequent measurement to nearly continuous monitoring and recording. Recent
advances in diverse sensing technologies, ranging from remote sensors to wearables and social sensing, are generating a rapid growth in the size and complexity of time series archives. Thus, although time series analysis has been
studied extensively, its importance only continues to grow. What is more, modern time series data pose significant challenges to existing techniques (e.g., irregular sampling in hospital records and spatiotemporal structure in
climate data). Finally, time series mining research is challenging and rewarding because it bridges a variety of disciplines and demands interdisciplinary solutions. Now is the time to discuss the next generation of temporal mining
algorithms. The focus of the MiLeTS workshop is to synergize research in this area and to discuss both new and open problems in time series analysis and mining. The solutions to these problems may be algorithmic, theoretical, statistical,
or systems-based in nature. Further, MiLeTS emphasizes applications to high impact or relatively new domains, including but not limited to biology, health and medicine, climate and weather, road traffic, astronomy, and energy.
The MiLeTS workshop will discuss a broad variety of topics related to time series analysis and mining. Workshop schedule:
08:00-08:10 Opening remarks
08:10-08:50 Keynote Talk
08:50-09:30 Keynote Talk
09:30-10:00 Coffee Break
10:00-10:40 Keynote Talk
10:40-12:00 Contributed Talks
12:00-13:00 Lunch Break
13:00-13:40 Keynote Talk
13:40-14:20 Keynote Talk
14:30-15:00 Keynote Talk
15:00-15:30 Coffee Break & Poster Session
15:30-15:45 Poster Session
15:45-16:50 Panel discussion
16:50-17:00 Concluding Remarks
In many domains, including healthcare, biology, and climate science, time series are irregularly sampled, with varying time intervals between successive readouts and different subsets of variables (sensors) observed at different time points. For example, sensors' observations might not be aligned, time intervals between adjacent readouts can vary across sensors, and different samples can have varying numbers of readouts recorded at different times. While machine learning methods usually assume fully observed, fixed-size inputs, irregularly sampled time series raise considerable challenges. In this talk, I will introduce Raindrop, a graph neural network that embeds irregularly sampled, multivariate time series while also learning the dynamics of sensors purely from observational data. Raindrop represents every sample as a separate graph, where nodes indicate sensors and time-varying dependencies between sensors are modeled by a novel message passing operator. I will describe applications of Raindrop to classifying time series and interpreting temporal dynamics on three healthcare and human activity datasets. Raindrop shows superior performance on multiple setups, including challenging leave-sensor-out settings.
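To give a flavor of the kind of sensor-graph message passing the abstract describes, here is a minimal numpy sketch. It is not the actual Raindrop operator: the function names, shapes, weight matrices, and the tanh update are all illustrative assumptions; the only idea taken from the abstract is that each sample is a graph whose nodes are sensors, with learned inter-sensor dependency weights steering the messages.

```python
import numpy as np

def message_passing_step(h, w, W_msg, W_self):
    """One sketched message-passing update over a per-sample sensor graph.

    h:      (S, d) per-sensor node embeddings (one node per sensor)
    w:      (S, S) learned inter-sensor dependency weights
    W_msg, W_self: (d, d) hypothetical parameter matrices
    """
    messages = w @ h @ W_msg              # aggregate neighbor states, weighted by dependencies
    return np.tanh(h @ W_self + messages) # combine with each node's own state

rng = np.random.default_rng(0)
S, d = 4, 8                               # toy sample: 4 sensors, 8-dim embeddings
h = rng.normal(size=(S, d))
w = rng.random(size=(S, S))
w /= w.sum(axis=1, keepdims=True)         # row-normalize the dependency weights
h_next = message_passing_step(h, w, 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d)))
```

Because each sample carries its own graph, sensors that were never observed for a given sample can simply be absent as nodes, which is one way such a design sidesteps the fixed-size-input assumption.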
Marinka Zitnik (https://zitniklab.hms.harvard.edu) is an Assistant Professor at Harvard with appointments in the Department of Biomedical Informatics, the Broad Institute of MIT and Harvard, and Harvard Data Science. Her research investigates applied machine learning, focusing on networked systems that require infusing structure and knowledge. Dr. Zitnik has published extensively in ML venues and leading scientific journals. She is an ELLIS Scholar in the European Laboratory for Learning and Intelligent Systems (ELLIS) Society and a member of the Science Working Group at NASA Space Biology. Her research has won best paper and research awards from the International Society for Computational Biology, the Bayer Early Excellence in Science Award, an Amazon Faculty Research Award, the Roche Alliance with Distinguished Scientists Award, a Rising Star Award in EECS, and a Next Generation in Biomedicine recognition, making her the only young scientist with such recognition in both EECS and Biomedicine.
Chandan Reddy is a Professor in the Department of Computer Science at Virginia Tech. He received his Ph.D. from Cornell University and M.S. from Michigan State University. His primary research interests are Machine Learning and Data Analytics with applications to healthcare, e-commerce, and transportation. His research has been funded by NSF, NIH, DOE, DOT and various industries. He has published over 150 peer-reviewed articles in leading conferences and journals. He received several awards for his research work including the best paper awards at SIGKDD and ICDM conferences, and was a finalist of the INFORMS Franz Edelman Award Competition in 2011. He is currently serving on the editorial boards of ACM TKDD, ACM TIST, and IEEE Big Data journals. He is a senior member of the IEEE and a distinguished member of the ACM. More information about his work is available at his homepage: http://www.cs.vt.edu/~reddy/
Head of Data Science and Systems Security Department
NEC Laboratories America
With decreasing hardware costs and an increasing demand for autonomic management, many physical systems nowadays are equipped with large networks of sensors, which generate a huge amount of time series data every day. Due to the high system complexity, however, those time series contain heterogeneous dependencies among different parts of the system and are mixed with noise and operational patterns. It is a challenge to correctly discover the system's operational status and health from the collected data. In this talk, I will share our work on transforming time series data into insights for system productivity, reliability and safety. I will start with discovering “invariants” from massive time series and leveraging those “invariants” for system management. I will then present deep learning based time series retrieval to digest and compress historical time series for explaining current observations. Finally, I will talk about our work on leveraging the attention mechanism and prototype learning to discover important patterns from time series. If time allows, I will also introduce our recent work that incorporates text data to further interpret the behavior of time series.
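To make the “invariants” idea concrete, here is a minimal sketch, not NEC's actual method: the pairwise linear-fit criterion, the `tol` threshold, and all names are illustrative assumptions. It flags sensor pairs whose linear relationship holds with small residual across a window, a toy notion of a stable “invariant” link that system management could then monitor for breakage.

```python
import numpy as np

def find_invariants(series, tol=0.05):
    """Flag sensor pairs (i, j) whose linear relationship y = a*x + b holds with
    small residual over the whole window -- a toy "invariant" criterion."""
    n_sensors = series.shape[1]
    links = []
    for i in range(n_sensors):
        for j in range(i + 1, n_sensors):
            x, y = series[:, i], series[:, j]
            a, b = np.polyfit(x, y, 1)              # least-squares fit y = a*x + b
            if np.std(y - (a * x + b)) < tol * np.std(y):
                links.append((i, j, a, b))          # residual small: keep as invariant link
    return links

# Toy data: sensor 1 tracks sensor 0 linearly; sensor 2 is unrelated noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
series = np.stack([np.sin(t),
                   2 * np.sin(t) + 1 + 0.001 * rng.normal(size=200),
                   rng.normal(size=200)], axis=1)
links = find_invariants(series)   # only the (0, 1) pair should qualify
```

In this toy example only the (sensor 0, sensor 1) pair survives the residual test, with a fitted slope near 2.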
Dr. Haifeng Chen heads the Data Science and System Security Department at NEC Laboratories America in Princeton, New Jersey. He received bachelor's and master's degrees in automation from Southeast University, China, and the Ph.D. degree in computer engineering from Rutgers University in 2004. He and his team members work on various topics related to big data analytics, AI, software and system security, and smart services and platforms. Dr. Chen has served on the program committees of a number of top AI conferences and has been on review panels for National Science Foundation (NSF) programs. He is a member of the School of Systems and Enterprises Advisory Board at Stevens Institute of Technology, New Jersey. Dr. Chen has co-authored more than a hundred conference/journal publications, including the best paper runner-up at SIGKDD'16, and has over 70 granted patents. Most of his research has led to advanced solutions and products for industrial domains including power plants, satellites, finance, and retail.
Director of Modeling and Simulation, Julia Computing; Director of Scientific Research, Pumas-AI; Research Affiliate and Co-PI of the Julia Lab, Massachusetts Institute of Technology
Differentiable simulation techniques are at the core of scientific machine learning methods, which are used for the automatic discovery of mechanistic models by infusing neural network training into the simulation process. In this talk we will start by showcasing some of the ways differentiable simulation is being used, from the discovery of extrapolatory epidemic models to nonlinear mixed effects models in pharmacology. From there, we will discuss the computational techniques behind the training process, focusing on the numerical issues involved in differentiating highly stiff and chaotic systems. Attendees will leave with an understanding of how compiler techniques are being infused into the simulation stack to provide the future of differentiable simulation, which merges machine learning with traditional biological and physical modeling.
Chris is the Director of Scientific Research at Pumas-AI, the Director of Modeling and Simulation at Julia Computing, Co-PI of the Julia Lab at MIT, and the lead developer of the SciML Open Source Software Organization. He is the lead developer of the Pumas project and has received a top presentation award at every ACoP in the last three years for improving methods for uncertainty quantification, automated GPU acceleration of nonlinear mixed effects modeling (NLME), and machine learning assisted construction of NLME models with DeepNLME. For these achievements, Chris received the Emerging Scientist award from ISoP. His work in mechanistic machine learning is credited with a 15,000x acceleration of NASA Launch Services simulations and recently demonstrated a 60x-570x acceleration over Modelica tools in HVAC simulation, earning Chris the US Air Force Artificial Intelligence Accelerator Scientific Excellence Award.
Staff Research Scientist and Manager
Google Cloud AI Research
With the growth of machine learning for structured data, the need for reliable model explanations is essential, especially in high-stakes applications. We introduce a novel framework, Interpretable Mixture of Experts (IME), that provides interpretability for structured data while preserving accuracy. IME consists of an assignment module and a mixture of interpretable experts such as linear models where each sample is assigned to a single interpretable expert. This results in an inherently-interpretable architecture where the explanations produced by IME are the exact descriptions of how the prediction is computed. In addition to constituting a standalone inherently-interpretable architecture, an additional IME capability is that it can be integrated with existing Deep Neural Networks (DNNs) to offer interpretability to a subset of samples while maintaining the accuracy of the DNNs. Experiments on various structured datasets demonstrate that IME is more accurate than a single interpretable model and performs comparably to existing state-of-the-art deep learning models in terms of accuracy while providing faithful explanations.
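The abstract's claim that IME's explanations are "exact descriptions of how the prediction is computed" can be illustrated with a toy sketch. This is not the actual IME implementation: the hard-argmax gating, the parameter names, and the random "fitted" weights are all assumptions; it only demonstrates the architectural point that when each sample is routed to a single linear expert, that expert's coefficients are the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted parameters: K linear experts plus a gating (assignment) module.
K, d = 3, 4
expert_W = rng.normal(size=(K, d))     # each row: one interpretable linear expert
expert_b = rng.normal(size=K)
gate_W = rng.normal(size=(K, d))       # assignment module scores each expert per sample

def predict_with_explanation(x):
    """Assign the sample to one expert; the prediction is that expert's linear
    output, so its weights and bias exactly describe the computation."""
    k = int(np.argmax(gate_W @ x))     # hard assignment to a single expert
    y = expert_W[k] @ x + expert_b[k]
    explanation = {"expert": k, "weights": expert_W[k], "bias": expert_b[k]}
    return y, explanation

x = rng.normal(size=d)
y, expl = predict_with_explanation(x)
```

Replaying the returned weights against the input reproduces the prediction exactly, which is the faithfulness property the abstract emphasizes, in contrast to post-hoc explanations of a black-box model.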
Sercan Arik is currently a Staff Research Scientist and Manager at Google Cloud AI Research. His work is motivated by the mission of democratizing AI and bringing it to the most impactful use cases in Healthcare, Finance, Technology, Retail, Media, Manufacturing, and many other industries. Towards this goal, he focuses on making AI more high-performance for the most-demanded data types, as well as interpretable, trustable, data-efficient, robust and reliable. He has led research projects that were launched as major Google Cloud products and yielded significant business impact. Before joining Google, he was a Research Scientist at Baidu Silicon Valley AI Lab, where he focused on deep learning research, particularly for applications in human-technology interfaces. He co-developed state-of-the-art speech synthesis, keyword spotting, voice cloning, and neural architecture search systems. Sercan Arik completed his PhD in Electrical Engineering at Stanford University.
In a typical business environment, with new product introductions, new datacenters starting up, and so on, time series are of uneven length, and many are very short. Recurrent Neural Networks (RNNs) are well suited to learning from and forecasting these kinds of data sets because, in contradistinction to CNNs, Transformers, etc., they have memory. The talk will describe a basic, well-performing RNN system based on LSTMs, and then describe more advanced versions with improved cells and network architecture. Finally, a relatively simple modification will be described that allows an RNN to be trained to predict any quantile on demand, providing a kind of probabilistic forecast.
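The standard mechanism behind quantile forecasting of this kind is the quantile (pinball) loss, which is minimized exactly at the q-th quantile of the target distribution; training with the desired quantile supplied as an input is one way to get "any quantile on demand." The sketch below is a hedged illustration of the loss itself, not the speaker's system, and verifies the quantile-recovery property numerically.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: its expectation is minimized when y_pred
    equals the q-th quantile of y_true's distribution."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Minimizing the pinball loss over a sample recovers the empirical quantile:
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
candidates = np.linspace(-3, 3, 601)
losses = [pinball_loss(y, c, 0.9) for c in candidates]
best = candidates[int(np.argmin(losses))]    # lands near the empirical 0.9-quantile
```

In an RNN trained this way, q would be sampled randomly per example and appended to the network's inputs, so a single model can be queried for any quantile at forecast time.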
Slawek Smyl received an M.Sc. degree in Physics from Jagiellonian University, Poland, and an M.Eng. degree in Information Technology from RMIT University, Australia. He is currently a Quantitative Engineer at Meta working in the area of time series forecasting. Mr. Smyl has ranked highly in forecasting competitions: he won the Computational Intelligence in Forecasting International Time Series Competition in 2016, took third place in the Global Energy Forecasting Competition in 2017, and won the M4 Forecasting Competition in 2018.
FQFormer: A Fully Quantile Transformer for Time Series Forecasting Shayan Jawed and Lars Schmidt-Thieme.
Gaussian Processes for Hierarchical Time Series Forecasting Luis Roque, Carlos Soares and Luis Torgo.
Forecasting with Sparse but Informative Variables: A Case Study in Predicting Blood Glucose Harry Rubin-Falcone, Joyce Lee and Jenna Wiens.
Class-Specific Attention (CSA) for Time-Series Classification Yifan Hao, Huiping Cao, K. Selçuk Candan, Jiefei Liu and Huiying Chen.
Probabilistic Continuous-Time Whole-Graph Forecasting Zhihan Gao, Hao Wang, Yuyang Wang, Xingjian Shi and Dit-Yan Yeung.
Towards Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms Linbo Liu, Youngsuk Park, Trong Nghia Hoang, Hilaf Hasson and Jun Huan.
Shapelet-Based Counterfactual Explanations for Multivariate Time Series Omar Bahri and Soukaina Filali Boubrahimi.
One-Class Predictive Autoencoder Towards Unsupervised Anomaly Detection on Industrial Time Series Hongjing Zhang, Fangzhou Cheng and Aparna Pandey.
Domain-Aware ML-Driven Predictive Analytics for Real-Time Proliferation Detection in Urban Environments Ellyn Ayton, Sannisth Soni, Mark Bandstra, Brian Quiter, Reynold Cooper and Svitlana Volkova.
Semi-unsupervised Learning for Time Series Classification Padraig Davidson, André Huhn, Michael Steininger, Anna Krause and Andreas Hotho.
Domain Adaptation under Behavioral and Temporal Shifts for Natural Time Series Mobile Activity Recognition Garrett Wilson, Jana Doppa and Diane Cook.
Time-Series Forecasting using Dynamic Graphs: Case Studies with Dyn-STGCN and Dyn-GWN on Finance and Traffic Datasets Shibal Ibrahim, Max R. Tell and Rahul Mazumder.
Mining Multivariate Time-Series for Anomaly Detection in Mobile Networks. On the Usage of Variational Auto Encoders and Dilated Convolutions Gastón García, Sergio Martinez Tagliafico, Alicia Fernández, Gabriel Gómez, José Acuña and Pedro Casas.
Long Range Capacity Planning Ali Jalali and Pranesh Vyas.
corrector LSTM: built-in training data correction for improved time series forecasting Yassine Baghoussi, Carlos Soares and João Mendes Moreira.
MQ-ReTCNN: Multi-Horizon Time Series Forecasting with Retrieval-Augmentation Sitan Yang, Carson Eisenach and Dhruv Madeka.
Robust Time Series Dissimilarity Measure for Outlier Detection and Periodicity Detection Xiaomin Song, Qingsong Wen, Yan Li and Liang Sun.
Recasting Self-Attention with Holographic Reduced Representations Mohammad Mahmudul Alam, Edward Raff, Tim Oates and James Holt.
Deep Learning-based Block Maxima Distribution Predictor for Extreme Value Prediction Kaneharu Nishino, Ken Ueno and Ryusei Shingaki.
Cross-Domain Graph Learning for Multivariate Time Series Forecasting Difan Zou, Ni Ma and Saher Esmeir.
Representation Learning Using a Multi-Branch Transformer for Industrial Time Series Anomaly Detection Ruichuan Zhang, Fangzhou Cheng and Aparna Pandey.
Large-Scale Enterprise Revenue Forecasting in Action Panpan Xu, Goktug T. Cinar, Ryan Burt, Jasleen Grewal, Anton Iakovlev, Michael Binder, Selvan Senthivel, Ruilin Zhang, Adrian Horvath, Miguel Romero Calvo and Lin Lee Cheong.
MQTransformer: Multi-Horizon Forecasts with Context Dependent Attention and Optimal Bregman Volatility Kevin Chen, Lee Dicker, Carson Eisenach, Dhruv Madeka and Yagna Patel.
Submissions should follow the SIGKDD formatting requirements (unless otherwise stated) and will be evaluated using the SIGKDD Research Track evaluation criteria. Preference will be given to papers that are reproducible, and authors are encouraged to share their data and code publicly whenever possible. Submissions are limited to 9 pages (4-8 pages suggested), including references, all in a single PDF. All submissions must be in PDF format using the KDD main conference paper template (see: https://kdd.org/kdd2022/cfpResearch.html). Submissions will be managed via the MiLeTS 2022 EasyChair website: https://easychair.org/conferences/?conf=milets2022.
Note on open problem submissions: In order to promote new and innovative research on time series, we plan to accept a small number of high quality manuscripts describing open problems in time series analysis and mining. Such papers should provide a clear, detailed description and analysis of a new or open problem that poses a significant challenge to existing techniques, as well as a thorough empirical investigation demonstrating that current methods are insufficient.
COVID-19 Time Series Analysis Special Track: The COVID-19 pandemic is impacting almost everyone worldwide and is expected to have life-altering short- and long-term effects. There are many potential applications of time series analysis and mining that can contribute to the understanding of this pandemic. We encourage submission of high quality manuscripts describing original problems, time series datasets, and novel solutions for time series analysis and forecasting of COVID-19.
The review process is single-round and double-blind (submission files must be anonymized). Concurrent submissions to other journals and conferences are acceptable. Accepted papers will be presented as posters during the workshop and listed on the website (non-archival/without proceedings). In addition, a small number of accepted papers will be selected to be presented as contributed talks.
Any questions may be directed to the workshop e-mail address: firstname.lastname@example.org.
Paper Submission Deadline: June 1st, 2022, 11:59PM Alofi Time (extended from May 26th)
Author Notification: June 27th, 2022 (extended from June 20th)
Camera Ready Version: July 2nd, 2022
Workshop: August 15th, 2022 8:00 AM - 5:00PM EDT
University of Maryland, Baltimore County
AWS AI Labs
University of Virginia
University of Connecticut
AWS AI Labs, Amazon
AWS AI Labs, Amazon
AWS AI Labs, Amazon
AWS AI Labs, Amazon