Cognitive science and artificial intelligence (AI) have a long-standing shared history. Early research in AI was inspired by human intelligence and shaped by cognitive scientists (e.g., Elman, 1990; Rumelhart and McClelland, 1986). At the same time, efforts to understand human learning and processing borrowed methods and data from AI to build cognitive models that mimicked human cognition (e.g., Anderson, 1975; Tenenbaum et al., 2006; Lieder & Griffiths, 2017; Dupoux, 2018). In the last five years, the field of AI has grown rapidly due to the success of large-scale deep learning models in a variety of applications (such as speech recognition and image classification). Interestingly, the algorithms and architectures in these models are often loosely inspired by natural forms of cognition (such as convolutional architectures and experience replay; e.g., Hassabis et al., 2017). In turn, improvements in these algorithms and architectures have enabled more advanced models of human cognition that can replicate, and therefore enlighten our understanding of, human behavior (Yamins & DiCarlo, 2016; Fan et al., 2018; Banino et al., 2018; Bourgin et al., 2019). Empirical data from cognitive psychology has also recently played an important role in measuring how current AI systems differ from humans and in identifying their …
Artificial intelligence has swept into agriculture over the last few years. From automatic crop monitoring via drones, smart agricultural equipment, food-security applications, and camera-powered apps assisting farmers, to satellite-image-based global crop disease prediction and tracking, computer vision has become a ubiquitous tool. This workshop aims to expose the fascinating progress and unsolved problems of computational agriculture to the AI research community. It is jointly organized by AI and computational agriculture researchers and has the support of CGIAR, a global partnership that unites international organizations engaged in agricultural research. The workshop will feature invited talks, panels, and discussions on gender and agriculture in the digital era and on AI for food security. It will also host and fund two open, large-scale competitions with prizes, as well as a prototyping session.
Neural Architecture Search (NAS) can be seen as the logical next step in automating the learning of representations. It follows upon the recent transition from manual feature engineering to automatically learning features (using a fixed neural architecture) by replacing manual architecture engineering with automated architecture design. NAS can be seen as a subfield of automated machine learning and has significant overlap with hyperparameter optimization and meta-learning. NAS methods have already outperformed manually designed architectures on several tasks, such as image classification, object detection, and semantic segmentation. They have also found architectures that yield a better trade-off between resource consumption on target hardware and predictive performance. The goal of this workshop is to bring together researchers from industry and academia who focus on NAS. NAS is an extremely hot topic of large commercial interest, and as such it has something of a history of closed source and competition. It is therefore particularly important to build a community behind this research topic, with collaborating researchers who share insights, code, data, benchmarks, training pipelines, etc., and together aim to advance the science behind NAS.
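The core loop behind most NAS methods is simple: sample an architecture from a search space, train and evaluate it, and keep the best. The sketch below illustrates this with random search over a toy, hypothetical search space; `evaluate` is a stand-in for the expensive train-and-validate step that a real NAS run would perform.

```python
import random

# Hypothetical toy search space: depth, width, and activation function.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6],
    "width": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Sample one architecture uniformly at random from the search space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training + validation: a real NAS run would train the
    network described by `arch` and return its validation accuracy."""
    score = arch["num_layers"] * 0.05 + arch["width"] / 256.0
    return score + (0.02 if arch["activation"] == "relu" else 0.0)

def random_search(num_trials=20, seed=0):
    """Keep the best architecture seen over `num_trials` random samples."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = random_search()
```

More sophisticated NAS methods (reinforcement learning, evolutionary search, differentiable relaxations) replace the sampling strategy, but the evaluate-and-compare structure stays the same; random search of this kind is also a common baseline in NAS benchmarks.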
Differential equations and neural networks are not only closely related to each other but also offer complementary strengths: the modelling power and interpretability of differential equations, and the approximation and generalization power of deep neural networks. The great leap forward in machine learning empowered by deep neural networks has relied primarily on increasing amounts of data coupled with modern abstractions of distributed computing. As models and problems grow larger and more complex, the need for ever-larger datasets becomes a bottleneck.
Differential equations offer a principled way to encode prior structural assumptions into nonlinear models such as deep neural networks, reducing their need for data while maintaining modelling power. These advantages allow models to scale up to bigger problems with better robustness and safety guarantees in practical settings.
While progress has been made on combining differential equations and deep neural networks, most existing work has been disjointed, and a coherent picture has yet to emerge. Substantive progress will require a principled approach that integrates ideas from disparate lenses, including differential equations, machine learning, numerical analysis, optimization, and physics.
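One concrete bridge between the two views treats a deep residual network as a discretization of an ordinary differential equation: the layer update h_{k+1} = h_k + f(h_k) is one Euler step of dh/dt = f(h). The sketch below, a minimal illustration under that assumption, integrates a small hypothetical learned vector field forward in time with fixed-step Euler; a real implementation would use an adaptive solver and backpropagate through (or around) it.

```python
import numpy as np

def dynamics(h, w):
    """Hypothetical learned vector field dh/dt = f(h; w): a one-layer tanh net."""
    return np.tanh(h @ w)

def odeint_euler(h0, w, t0=0.0, t1=1.0, steps=100):
    """Integrate the hidden state forward with fixed-step Euler.
    Each step mirrors one residual-network layer h_{k+1} = h_k + dt * f(h_k)."""
    h, dt = h0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * dynamics(h, w)
    return h

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 4))   # "learned" parameters (random here)
h0 = rng.normal(size=(1, 4))             # initial hidden state
h1 = odeint_euler(h0, w)                 # hidden state at t = 1
```

The continuous-time view is what lets prior structure (conservation laws, known physics terms) be built directly into `dynamics`, and lets numerical-analysis tools reason about stability and accuracy of the resulting model.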
The goal of this workshop is to provide a forum where theoretical and experimental researchers …
ML-IRL will focus on the challenges of real-world use of machine learning and the gap between what ML can do in theory and what is needed in practice. Given the tremendous recent advances in methodology, from causal inference to deep learning, the strong interest in applications (in health, climate, and beyond), and the discovery of problematic implications (e.g., issues of fairness and explainability), now is an ideal time to examine how we develop, evaluate, and deploy ML, and how we can do it better. We envision a workshop that is focused on productive solutions, not mere identification of problems or demonstration of failures.
Overall, we aim to examine how real-world applications can and should influence every stage of ML, from how we develop algorithms to how we evaluate them. These topics are fundamental for the successful real-world use of ML, but are rarely prioritized. We believe that a workshop focusing on these issues in a domain independent way is a necessary starting point for building more useful and usable ML. We will have speakers and participants representing all core topics (developing novel algorithms that work in the real world, specific applications and how we can learn from them, human factors and …
Climate change is one of the greatest problems society has ever faced, with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. Since climate change is a complex issue, action takes many forms, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. While it is no silver bullet, machine learning can be an invaluable tool in fighting climate change via a wide array of applications and techniques. These applications require algorithmic innovations in machine learning and close collaboration with diverse fields and practitioners. This workshop is intended as a forum for those in the machine learning community who wish to help tackle climate change.
As ML systems are pervasively deployed, security and privacy challenges have become central to their design. The community has produced a vast body of work to address these challenges and increase trust in ML. Yet much of this work concentrates on well-defined problems that are tractable from a mathematical perspective but hard to translate to the threats that target real-world systems.
This workshop calls for novel research that addresses the security and privacy risks arising from the deployment of ML, from malicious exploitation of vulnerabilities (e.g., adversarial examples or data poisoning) to concerns on fair, ethical and privacy-preserving uses of data. We aim to provide a home to new ideas “outside the box”, even if proposed preliminary solutions do not match the performance guarantees of known techniques. We believe that such ideas could prove invaluable to more effectively spur new lines of research that make ML more trustworthy.
We aim to bring together experts from a variety of communities (machine learning, computer security, data privacy, fairness & ethics) in an effort to synthesize promising ideas and research directions, as well as foster and strengthen cross-community collaborations. Indeed, many fundamental problems studied in these diverse areas can be broadly recast …
Machine learning has enabled significant improvements in many areas. Because most ML methods are based on inferring statistical correlations, they can become unreliable when spurious correlations present in the training data do not hold in the test setting. One way of tackling this problem is to learn the causal structure of the data-generating process (causal models). The general problem of causal discovery requires performing all possible interventions on the model. However, this may be too expensive and/or infeasible in real environments: understanding how to intervene in the environment most efficiently in order to uncover the most information is therefore a necessary requirement for uncovering causal information in real-world applications. In this workshop, we investigate a few key questions and topics:
- What is the role of an underlying causal model in decision-making?
- What is the difference between a prediction that is made with a causal model and one made with a non-causal model?
- What is the role of causal models in decision-making in real-world settings, for example in relation to fairness, transparency, and safety?
- The way current RL agents explore environments appears less intelligent than the way human learners explore. …
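The difference between causal and non-causal prediction is easy to see in a toy simulation with a confounder. Below, a hypothetical variable Z drives both the treatment X and the outcome Y; the true causal effect of X on Y is 1.0. A purely correlational regression of Y on X overstates the effect, while adjusting for the confounder (back-door adjustment) recovers it. This is a minimal sketch, not tied to any particular workshop method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: Z confounds X and Y.
# True causal effect of X on Y is 1.0.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 1.0 * x + 3.0 * z + rng.normal(size=n)

# Non-causal prediction: regress Y on X alone -> biased slope (~2.2 here).
naive = np.polyfit(x, y, 1)[0]

# Causal estimate: include the confounder Z in the regression.
X = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][0]
```

Both models may predict Y equally well from observed data, but only the adjusted (causal) model answers the interventional question "what happens to Y if we set X?", which is what matters for fairness, safety, and decision-making.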
Recent work has demonstrated that current reinforcement learning methods are able to master complex tasks given enough resources. However, these successes have mainly been confined to single, unchanging environments. By contrast, the real world is both complex and dynamic, making it impossible to anticipate each new scenario. Many standard learning approaches require tremendous resources in data and compute to re-train. At the same time, learning offers the potential to develop versatile agents that adapt and continue to learn across environment changes and shifting goals and feedback. To achieve this, agents must be able to apply knowledge gained from past experience to the situation at hand. We aim to bring together areas of research that provide different perspectives on how to extract and apply this knowledge.
The BeTR-RL workshop aims to bring together researchers from different backgrounds with a common interest in how to extend current reinforcement learning algorithms to operate in changing environments and tasks. Specifically, we are interested in the following lines of work: leveraging previous experience to learn representations or learning algorithms that transfer to new tasks (transfer and meta-learning), generalizing to new scenarios without any explicit adaptation (multi-task and goal-conditioned RL), and learning new capabilities while retaining the …
Earth sciences, or geosciences, encompass understanding the physical characteristics of our planet, including its lithosphere, hydrosphere, atmosphere, and biosphere, drawing on all fields of the natural and computational sciences. As the Earth sciences enter an era of increasing volumes and variety of geoscientific data from sensors, as well as high-performance computing simulations, machine learning methods are poised to augment, and in some cases replace, traditional methods. Interest in the application of machine learning, deep learning, reinforcement learning, computer vision, and robotics to the geosciences is growing rapidly at major Earth science and machine learning conferences.
Our workshop proposal, AI for Earth Sciences, seeks to bring cutting-edge geoscientific and planetary challenges to the fore for the machine learning and deep learning communities. We seek machine learning interest from the major areas encompassed by the Earth sciences, which include atmospheric physics, hydrologic sciences, cryosphere science, oceanography, geology, planetary sciences, space weather, geo-health (i.e., water, land, and air pollution), volcanism, seismology, and biogeosciences. We call for papers demonstrating novel machine learning techniques in remote sensing for meteorology and the geosciences, generative Earth system modeling, transfer learning from geophysics and numerical simulations, and uncertainty in Earth science learning representations. We also seek theoretical developments in interpretable machine …
Healthcare is under significant pressure: costs are rising, populations are aging, lifestyles are becoming more sedentary, and, critically, we lack experts to meet the rising demand. In addition, in under-developed countries healthcare quality remains limited. Meanwhile, AI has shown great promise for healthcare applications, and the digitisation of data and the use of electronic health records are becoming more widespread. AI could play a key role in enabling, democratising, and upholding high standards of healthcare worldwide, assisting health professionals to make decisions faster, more accurately, and more consistently.
However, so far, the adoption of AI in real-world healthcare applications has been slow relative to that in domains such as autonomous driving. In this workshop, we aim to highlight recent advances and potential opportunities via a series of talks and panel discussions. We are supported by local institutes in Ethiopia, which, besides being the ICLR host country, is a prime example of a country that stands to gain much from the application of AI to healthcare. We invite technical papers and white papers addressing challenges that are important to the real-world deployment of AI in healthcare.
The rise in ML community efforts on the African continent has led to a growing interest in Natural Language Processing, particularly for African languages, which are typically low-resource languages. This interest is manifesting in the form of national, regional, continental, and even global collaborative efforts to build corpora, as well as the application of the aggregated corpora to various NLP tasks.
This workshop therefore has several aims:
1) to showcase the work being done by the African NLP community and provide a platform to share this expertise with a global audience interested in NLP techniques for low-resource languages
2) to provide a platform for the groups involved with the various projects to meet, interact, share and forge closer collaboration
3) to provide a platform for junior researchers to present papers and solutions, and to begin interacting with the wider NLP community
4) to present an opportunity for more experienced researchers to further publicize their work and inspire younger researchers through keynotes and invited talks
According to the World Health Organization (WHO), cancer is the second leading cause of death globally and was responsible for an estimated 9.6 million deaths in 2018. Approximately 70% of deaths from cancer occur in low- and middle-income countries (LMIC), in large part due to lack of proper access to screening, diagnosis, and treatment services. As the economic impact of cancer increases, disparities in diagnosis and treatment options prevail. Recent advances in the field of machine learning have bolstered excitement for the application of assistive technologies in the medical domain, with the promise of improved care for patients. Unfortunately, cancer care in LMIC faces a very different set of challenges; unless focused efforts are made to overcome these challenges, cancer care in these countries will be largely unaffected. The purpose of this workshop is to bring together experts in machine learning and clinical cancer care to facilitate discussions regarding challenges in cancer care and opportunities for AI to make an impact. In particular, there is immense potential for novel representation learning approaches to learn from different data modalities such as pathology, genomics, and radiology. Studying these approaches has the potential to significantly improve survival outcomes and improve the lives …
The constant progress being made in artificial intelligence needs to extend across borders if we are to democratize AI in developing countries. Adapting state-of-the-art (SOTA) methods to resource-constrained environments such as developing countries is challenging in practice. Recent breakthroughs in natural language processing (NLP), for instance, rely on increasingly complex and large models (e.g., most models based on transformers, such as BERT, VilBERT, ALBERT, and GPT-2) that are pre-trained on large corpora of unlabeled data. In most developing countries, low or limited resources mean a hard path toward adopting these breakthroughs. Methods such as transfer learning will not fully solve the problem either, due to bias in pre-training datasets that do not reflect real test cases in developing countries, as well as the prohibitive cost of fine-tuning these large models. Recent progress focused on ML for social good has the potential to alleviate the problem in part. However, the themes in such workshops are usually application-driven, such as ML for healthcare or for education, and less attention is given to the practical aspects of implementing these solutions in low- or limited-resource scenarios in developing countries. This, in turn, hinders the democratization of AI …
In this ICLR workshop, we will explore the intersection of big science and AI and the changing nature of good fundamental research in the era of pervasive AI. The workshop will also examine how AI can enhance the social good of Big Science, which is relevant for the African continent given that the Square Kilometre Array (SKA), one of the largest astronomical endeavors ever undertaken, will be majority hosted in Africa. In addition, this workshop aims to stimulate discussion across astronomy, cosmology, and particle physics.