Autumn School MALVIC: Machine Learning and Vision for Industrial Applications
Bringing together pioneering international scientists in machine learning and computer vision.
Last updated: 22 Oct. 2021
Machine learning and computer vision have the potential to significantly improve the automation and autonomy of many industrial applications (e.g. offshore, automotive, telecommunication, gaming, and multimedia) by enhancing operational performance, reducing the cost of manual operations, minimizing losses, optimizing productivity, and improving safety and security.
The goal of the Autumn School MALVIC is to bring together pioneering international scientists in machine learning and computer vision with academics and practitioners from industry in a unique setting for the discussion and demonstration of practical, hands-on machine learning and vision research and development. Offshore applications and industrial process scenarios are examples of the autumn school's target domains.
Attendees of MALVIC will enjoy the following:
World-pioneering scientists giving lectures on computer vision and AI.
World-leading companies (e.g. Google, Daimler, NVIDIA, Amazon) giving practical talks on computer vision and AI in practice.
Business talks in which companies share their success stories on how they grew and made money using computer vision and AI.
A surprise session that we are working on.
Join in on the open session on Tuesday 19 October
In an open round table we discuss the status quo of computer vision and artificial intelligence and their potential trends for the future of our society. In this discussion, we are very honored to have with us:
Professor Takeo Kanade: one of the ten most impactful computer scientists in the world in the last century, and winner of the Franklin Award (2008) and the Kyoto Prize (2016).
Anne-Marie Sassen, acting head of the Digital Transformation of Industrial Ecosystems at the European Commission.
Gal Chechik, director of AI research at NVIDIA, a leading company in computing systems.
Thomas Bäck: Industrial Optimization and the Search for New Algorithms
Direct global optimization algorithms based on some instance of evolutionary computation have shown great success in a wide range of application domains, for example engineering design optimization. When the problem dimensionality is small (n < 20), so-called efficient global optimization (EGO) is also a very suitable class of algorithms, and I will introduce a generalization of the concept of an acquisition function in EGO that automatically handles the exploration–exploitation tradeoff.
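The exploration–exploitation tradeoff in EGO is usually mediated by an acquisition function. As a minimal reference point (the classic expected-improvement criterion, not the generalization presented in the talk), here is a sketch for a minimization problem, assuming a Gaussian surrogate posterior with mean `mu` and standard deviation `sigma` at a candidate point:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement of a candidate point for minimization, given a
    Gaussian surrogate posterior N(mu, sigma^2) and the best objective
    value f_best observed so far."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)  # no uncertainty: plain improvement
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # std normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # std normal cdf
    # Exploitation term (predicted improvement) + exploration term (uncertainty)
    return (f_best - mu) * cdf + sigma * pdf

# A point predicted to merely match the incumbent, but with high
# uncertainty, still has positive expected improvement (exploration).
print(expected_improvement(mu=1.0, sigma=0.5, f_best=1.0))
```

Note how the second term rewards uncertain regions even when the predicted mean offers no improvement; this is how EGO balances exploration against exploitation.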
In automatic machine learning, the optimization of hyperparameters (also called the algorithm configuration problem) is currently of considerable interest. I will briefly explain this problem and then provide some examples illustrating that this task can be handled by direct global optimization algorithms as well. While algorithm configuration is commonly applied to machine learning algorithms, applying it to evolution strategies is a new application domain. I will give a simple example of how a combinatorial design space of 4,608 configuration variants of evolution strategies can be created and investigated using data mining. This kind of “combinatorial algorithmics” provides an opportunity for discovering unexplored areas of the optimization algorithm design space. Finally, I provide a quick idea of an extension of EGO for the combined algorithm selection and hyperparameter optimization (CASH) task in machine learning.
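Algorithm configuration can itself be treated as a black-box optimization problem over a mixed design space. A minimal, illustrative sketch (the quadratic `performance` function is a made-up stand-in for actually running and scoring the configured algorithm), using plain random sampling as the simplest direct search:

```python
import random

def performance(config):
    """Stand-in for an expensive evaluation: run the target algorithm with
    this configuration and return its (to-be-minimized) score.  Here a toy
    quadratic with its optimum near lr=0.1, pop_size=40."""
    return (config["lr"] - 0.1) ** 2 + ((config["pop_size"] - 40) / 100) ** 2

def random_search(n_trials, seed=0):
    """Direct black-box search over a mixed configuration space."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {
            "lr": rng.uniform(1e-3, 1.0),        # continuous hyperparameter
            "pop_size": rng.randrange(10, 200),  # integer hyperparameter
        }
        score = performance(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search(n_trials=500)
print(best_cfg, best_score)
```

EGO-style methods replace the blind sampling above with a surrogate model and an acquisition function, which matters precisely when each `performance` call is expensive.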
To conclude, I return to engineering design optimization tasks, one in wing design and one in ship design. Both are multi-objective, both use a variant of efficient global optimization, and the first focuses on modeling user preferences in objective space while the second learns internal models of the constraints using radial basis functions. Both aim at illustrating today’s requirements in engineering design applications.
Prof. Thomas Bäck, Professor at Leiden University (The Netherlands) and Chief Scientist at NORCE. He is head of the Natural Computing Research Group and Director of Education at the Leiden Institute of Advanced Computer Science (LIACS). He received his PhD in Computer Science from Dortmund University, Germany, in 1994. He has been Associate Professor of Computer Science at Leiden University since 1996 and full Professor of Natural Computing since 2002.
Horst Bischof: Understanding Activities in an Industrial Context
This talk will highlight some recent work in the area of understanding actions and human activities. Special emphasis will be devoted to sequence segmentation, recognition of complex (long-term) activities, and domain adaptation. Examples from real-world applications will illustrate the presented methods.
Prof. Horst Bischof is vice rector for research and Professor at the Institute for Computer Graphics and Vision at the Graz University of Technology, Austria. He has more than 750 publications with notable works on object recognition, visual learning, on-line and life-long learning, motion and tracking, visual surveillance and biometrics and medical computer vision.
Daniel Cremers: Deep Visual SLAM
Visual Simultaneous Localization and Mapping (SLAM) is of utmost importance to autonomous systems and augmented reality. I will discuss direct methods for visual SLAM (LSD SLAM and DSO) that recover camera motion and 3D structure directly from brightness consistency thereby providing better performance in terms of precision and robustness compared to classical keypoint-based techniques.
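Brightness consistency means that, for the correct camera motion and scene geometry, a reference pixel and its reprojection in the next frame should have the same intensity. A much-simplified sketch of the photometric residual that direct methods minimize, with a hypothetical `warp` standing in for the full pose-and-depth reprojection used in LSD-SLAM/DSO:

```python
def photometric_error(ref_img, tgt_img, pixels, warp):
    """Sum of squared brightness differences between reference pixels and
    their warped positions in the target image (nearest-neighbour lookup).
    ref_img/tgt_img: 2D lists of intensities; warp: (x, y) -> (x', y')."""
    error = 0.0
    for (x, y) in pixels:
        xw, yw = warp(x, y)
        xi, yi = int(round(xw)), int(round(yw))
        if 0 <= yi < len(tgt_img) and 0 <= xi < len(tgt_img[0]):
            diff = ref_img[y][x] - tgt_img[yi][xi]
            error += diff * diff  # direct methods minimize this over pose/depth
    return error

# Two tiny frames related by a 1-pixel horizontal shift of the content.
ref = [[10, 20, 30],
       [40, 50, 60]]
tgt = [[0, 10, 20],
       [0, 40, 50]]
shift_right = lambda x, y: (x + 1, y)  # candidate camera motion
print(photometric_error(ref, tgt, [(0, 0), (1, 0), (0, 1)], shift_right))
```

The correct candidate motion drives the residual to zero here; in a real direct SLAM system this error is minimized jointly over camera pose and per-pixel depth, over many pixels and pyramid levels.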
Moreover, I will demonstrate how we can leverage the predictive power of self-supervised deep learning in order to significantly boost the performance of direct SLAM methods. The resulting method, D3VO, allows us to track a single camera with a precision that is on par with state-of-the-art stereo-inertial odometry methods.
Lastly, I will introduce MonoRec - a deep network that can generate faithful dense reconstruction of the observed world from a single moving camera.
Prof. Daniel Cremers is Professor of Informatics and Mathematics at TU Munich, Germany. He is one of the leading experts in computer vision, machine learning & deep networks, with a focus on mathematical image analysis (segmentation, motion estimation, multi-view reconstruction, visual SLAM). In December 2010 he was listed among "Germany's top 40 researchers below 40" (Capital). On March 1st 2016, Prof. Cremers received the Gottfried Wilhelm Leibniz Award, the biggest award in German academia.
Marius Leordeanu: Towards Unsupervised Learning in Space and Time: From Metric Depth Estimation and Semantic Segmentation to Complete 4D Scene Understanding
We address the exciting and very challenging problem of unsupervised visual learning in space and time, which has a tremendous impact in the current world of artificial intelligence and its applications to robotics and computer science. We will start by presenting our recent work on unsupervised monocular metric depth estimation and semi-supervised semantic segmentation, in the context of Unmanned Aerial Vehicles (UAVs). We will introduce efficient algorithms and novel datasets from UAV flights, captured in varied and complex European scenes. We will also present extensive comparisons to the state-of-the-art. Then we will move towards the more complex case of unsupervised multi-task learning of the visual 4D scene from as many interpretation views as possible.
At the core of our approach we have a self-supervised learning model based on automatically reaching consensus in a graph of neural networks. Each node in the graph is a scene interpretation layer, while each edge is a deep net that transforms one layer at one node into another from a different node. The edge networks are trained unsupervised on the pseudo-ground truth provided by consensus among multiple paths that reach the nets' start and end nodes. These paths act as ensemble teachers for any given edge and strong consensus is used for high-confidence supervisory signals. The unsupervised learning process is repeated over several generations, in which each edge becomes a "student" and also part of different ensemble "teachers" for training other students. By optimizing such consensus between different paths, the graph reaches consistency and robustness over multiple interpretations and generations, in the face of unknown labels. We will present several efficient strategies for constructing the unsupervised pseudo-labels from multiple graph paths and show that the multi-task graph structure as well as the consensus-finding procedure are essential factors.
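The consensus mechanism can be sketched as follows: each path through the graph proposes a prediction for a target node, and only samples where the ensemble of paths strongly agrees are kept as pseudo-labels for training an individual edge net. This is a simplified, classification-flavoured illustration; the model in the talk operates on dense scene-interpretation layers:

```python
def consensus_pseudo_labels(path_predictions, agreement_threshold=0.75):
    """path_predictions: list of prediction lists, one per graph path, each
    giving a label per sample.  Returns {sample_index: label} only for
    samples where enough paths agree (high-confidence supervisory signal)."""
    n_paths = len(path_predictions)
    n_samples = len(path_predictions[0])
    pseudo_labels = {}
    for i in range(n_samples):
        votes = {}
        for path in path_predictions:
            votes[path[i]] = votes.get(path[i], 0) + 1
        label, count = max(votes.items(), key=lambda kv: kv[1])
        if count / n_paths >= agreement_threshold:  # strong consensus only
            pseudo_labels[i] = label
    return pseudo_labels

# Four paths predict labels for three samples; sample 1 is ambiguous
# and is therefore dropped from the pseudo-ground truth.
paths = [
    ["road", "sky",  "car"],
    ["road", "tree", "car"],
    ["road", "sky",  "car"],
    ["road", "car",  "car"],
]
print(consensus_pseudo_labels(paths))
```

The filtered labels then serve as training targets for a single edge ("student"), while the agreeing paths play the role of the ensemble "teacher"; repeating this over generations is what drives the graph towards consistency.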
Throughout the presentation we will balance intuitions and theoretical insights with extensive tests in the real world of robotics and vision, such as visual learning for drones. We will show that the idea of pushing self-supervised learning towards the case of multiple tasks has the potential to improve robustness and move the field forward, towards the general case of unsupervised multi-interpretation learning in space and time.
Marius Leordeanu is Professor at the University Politehnica of Bucharest (UPB) and Senior Researcher at the Institute of Mathematics of the Romanian Academy (IMAR). Marius obtained his Bachelor's in Mathematics and Computer Science at Hunter College - City University of New York (2003) and PhD in Robotics at Carnegie Mellon University (2009). His research spans different areas in vision and learning, with main focus on unsupervised learning in the space-time domain, vision for drones and aerial scene understanding, optimization on graphs and neural nets, as well as relating vision to natural language. He coordinates several research groups, both in academia and industry, having strong collaborations on topics that range from general computer vision (e.g. Google, Bitdefender, NORCE) to specific applications for autonomous vehicles (e.g. ARNIA, Google, NORCE), the wood industry (Fordaq) and medical imaging (Siemens). For his work on graph matching and unsupervised learning he received the "Grigore Moisil Prize" in Mathematics (2014), the top award given by the Romanian Academy. In 2020 Marius published a book, Unsupervised Learning in Space and Time (Springer), which proposes a general unsupervised learning model that brings together the powers of graphs and deep neural networks.
Jürgen Schmidhuber: Modern Artificial Intelligence - 1980s-2021 and Beyond
Significant historic events appear to be occurring more frequently as time goes on. Interestingly, it seems like subsequent intervals between these events are shrinking exponentially by a factor of four. This process looks like it should converge around the year 2040.
The last of these major events can be said to have occurred around 1990 when the cold war ended, the WWW was born, mobile phones became mainstream, the first self-driving cars appeared, and modern AI with very deep neural networks came into being. In this talk, I'll focus on the latter, with emphasis on Metalearning since 1987 and what I call "the miraculous year of deep learning" which saw the birth of—among other things—(1) very deep learning through unsupervised pre-training, (2) the vanishing gradient analysis that led to the LSTMs running on your smartphones and to the really deep Highway Nets/ResNets, (3) neural fast weight programmers that are formally equivalent to what’s now called linear Transformers, (4) artificial curiosity for agents that invent their own problems (familiar to many nowadays in the form of GANs), (5) the learning of sequential neural attention, (6) the distilling of teacher nets into student nets, and (7) reinforcement learning and planning with recurrent world models. I’ll discuss how in the 2000s much of this has begun to impact billions of human lives, how the timeline predicts the next big event to be around 2030, what the final decade until convergence might hold, and what will happen in the subsequent 40 billion years. Take all of this with a grain of salt though.
Jürgen Schmidhuber, Co-Founder and Chief Scientist at NNAISENSE, Director of the AI Initiative at KAUST, Scientific Director, Swiss AI Lab, IDSIA, Lugano, Switzerland, Adjunct Professor of Artificial Intelligence, Faculty of Computer Science, University of Lugano (USI), Chief Scientific Advisor of the non-profit AI Research Institute (AIRI), Moscow.
Since age 15 or so, the main goal of professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. His lab's Deep Learning Neural Networks based on ideas published in the "Annus Mirabilis" 1990-1991 have revolutionised machine learning and AI. By the mid 2010s, they were on 3 billion devices, and used billions of times per day by users of the world's most valuable public companies, e.g., for greatly improved (CTC-LSTM-based) speech recognition on all Android phones, greatly improved machine translation through Google Translate and Facebook (over 4 billion LSTM-based translations per day), Apple's Siri and Quicktype on all iPhones, the answers of Amazon's Alexa, and numerous other applications. In 2011, his team was the first to win official computer vision contests through deep neural nets, with superhuman performance.
In 2012, they had the first deep NN to win a medical imaging contest (on cancer detection). All of this attracted enormous interest from industry. His research group also established the fields of mathematically rigorous universal AI and recursive self-improvement in metalearning machines that learn to learn (since 1987).
In 1990, he introduced unsupervised adversarial neural networks that fight each other in a minimax game to achieve artificial curiosity (GANs are a special case). In 1991, he introduced very deep learning through unsupervised pre-training, and neural fast weight programmers formally equivalent to what's now called linear Transformers. His formal theory of creativity & curiosity & fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. He is the recipient of numerous awards, author of over 350 peer-reviewed papers, and Chief Scientist of the company NNAISENSE, which aims at building the first practical general purpose AI. He is a frequent keynote speaker and advises various governments on AI strategies.
Guy Theraulaz: The Collective Intelligence of Superorganisms
Several animal species living in groups or societies are able to display spectacular collective behaviors. This is the case of starling flocks, which, at dusk, gather tens of thousands of individuals and engage in astonishing aerial choreographies. At another scale, social insects (ants, termites, certain species of wasps and bees) have developed amazing abilities to coordinate their activities. For the past thirty years, scientists have sought to unravel the mysteries of this collective intelligence. Such ability emerges from the interactions between individuals that allow these groups of animals to self-organize. Today, thanks to these studies and the analysis and modeling of these interactions, we have a better understanding of the mechanisms that allow these social organisms to coordinate their movements, build complex nest architectures and collectively solve multiple problems.
Guy Theraulaz is a senior research fellow at the National Center for Scientific Research (CNRS) and an expert in the study of collective animal behaviors. He is also a leading researcher in the field of swarm intelligence, primarily studying social insects but also distributed algorithms, e.g. for collective robotics, directly inspired by nature.
His research focuses on the understanding of a broad spectrum of collective behaviors in animal societies by quantifying and then modeling the individual level behaviors and interactions, thereby elucidating the mechanisms generating the emergent, group-level properties.
He was one of the main figures in the development of quantitative social ethology and collective intelligence in France. He has published many papers on nest construction in ant and wasp colonies, collective decision-making in ants and cockroaches, and collective motion in fish schools and pedestrian crowds. He has also coauthored five books, among which Swarm Intelligence: From Natural to Artificial Systems (Oxford University Press, 1999) and Self-Organization in Biological Systems (Princeton University Press, 2001) are now considered reference textbooks.
René Vidal: Mathematics of Deep Learning
The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning. However, the mathematical reasons for this success remain elusive. For example, a key issue is that the neural network training problem is non-convex, hence optimization algorithms may not return a global minimum. In addition, the regularization properties of algorithms such as dropout remain poorly understood. The first part of this tutorial will overview recent work on the theory of deep learning that aims to understand how to design the network architecture, how to regularize the network weights, and how to guarantee global optimality. The second part of this tutorial will present an analysis of the optimization and regularization properties of dropout for matrix factorization. Examples from neuroscience and computer vision will also be presented.
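As a concrete reference point for the regularization discussion, here is a minimal sketch of the dropout mechanism itself, in its "inverted" form (the variant used in most modern frameworks, where survivors are rescaled at training time so no scaling is needed at test time):

```python
import random

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: at training time each unit is zeroed with
    probability p_drop and survivors are scaled by 1/(1 - p_drop), so the
    expected activation is unchanged; at test time it is the identity."""
    if not training or p_drop == 0.0:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
x = [1.0, 2.0, 3.0, 4.0]
print(dropout(x, p_drop=0.5, rng=rng))                   # some units zeroed, survivors doubled
print(dropout(x, p_drop=0.5, rng=rng, training=False))   # identity at test time
```

The tutorial's analysis asks what implicit regularizer this random zeroing corresponds to, e.g. for matrix factorization, which is far less obvious than the mechanism above suggests.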
Prof. René Vidal, Professor at JHU, USA, and Chief scientist at NORCE. He is the Herschel Seder Professor of Biomedical Engineering and the Inaugural Director of the Mathematical Institute for Data Science at The Johns Hopkins University. He has secondary appointments in Computer Science, Electrical and Computer Engineering, and Mechanical Engineering.
Danny Weyns: From Self-adaptation to Lifelong Computing
With the progressing digitalisation of our society, the demands on computing systems are increasing at incredible speed, to the point where current human-based engineering of computing systems will soon simply no longer be manageable. The first part of this tutorial will elaborate on the concept of self-adaptation, initially proposed by IBM (in their program on autonomic computing) as “the only viable solution” to the manageability problems of complex computing systems that face continuous change. We explain the basic principles of self-adaptation, elaborate on engineering approaches for its realisation, and illustrate these with examples. The second part of the tutorial will then argue why self-adaptation falls short in tackling the hard problems of future computing systems. We will make a case for “lifelong computing”, a newly proposed paradigm for the design and operation of computing systems. A lifelong computing system dynamically evolves its own architecture, where the design choices are ultimately enacted by the system itself. This yields self-learning systems that autonomously handle changing conditions, both foreseen and unforeseen. The tutorial concludes by highlighting key challenges that we need to overcome to realize the lifelong computing paradigm.
Danny Weyns is professor at the Department of Computer Science at the Katholieke Universiteit Leuven, Belgium. He is also affiliated with Linnaeus University in Sweden. His main research interests are in software engineering of self-adaptive systems, with a particular focus on the use of design models and verification techniques at runtime to provide assurances for the goals of computing systems. He received a PhD from KU Leuven in 2006. Recently he authored the book “An Introduction to Self-adaptive Systems: A Contemporary Software Engineering Perspective”, published by Wiley.
Xin Yao: Ensemble Approaches to Class Imbalance Learning
Many real-world classification problems have highly imbalanced and skewed data distributions. In fault diagnosis and condition monitoring, for example, there are ample data for the normal class, yet data for faults are always very limited and costly to obtain. It is often a challenge to increase the performance of a classifier on minority classes without sacrificing the performance on majority classes. This talk discusses some of the techniques and algorithms that have been developed for class imbalance learning, especially through ensemble learning.
First, the motivations behind ensemble learning are introduced and the importance of diversity highlighted.
Second, some of the challenges of multi-class imbalance learning and potential solutions are presented. What might have worked well for the binary case no longer works for multiple classes, especially when the number of classes increases.
Third, online class imbalance learning will be discussed, which can be seen as a combination of online learning and class imbalance learning. Online class imbalance learning poses new research challenges that still have not been well understood, let alone solved, especially for imbalanced data streams with concept drift.
Fourth, the natural fit of multi-objective learning to class imbalance learning is pointed out. The relationship between multi-objective learning and ensemble learning will be discussed. Finally, future research directions will be given.
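One common ensemble recipe for class imbalance (in the spirit of EasyEnsemble-style undersampling bagging, given here as a generic illustration rather than the talk's specific algorithms) trains each base learner on the full minority class plus a fresh random subsample of the majority class; the differing subsamples also inject the diversity whose importance the talk highlights:

```python
import random

def balanced_bags(samples, labels, minority_label, n_bags, seed=0):
    """Build n_bags balanced training sets: each keeps every minority-class
    sample and undersamples the majority class down to the same size.
    Each base learner of the ensemble is then trained on one bag."""
    rng = random.Random(seed)
    minority = [(x, y) for x, y in zip(samples, labels) if y == minority_label]
    majority = [(x, y) for x, y in zip(samples, labels) if y != minority_label]
    bags = []
    for _ in range(n_bags):
        bag = minority + rng.sample(majority, len(minority))  # fresh subsample
        rng.shuffle(bag)
        bags.append(bag)
    return bags

# 12 "normal" readings vs 3 "fault" readings (e.g. condition monitoring).
X = list(range(15))
y = ["normal"] * 12 + ["fault"] * 3
for bag in balanced_bags(X, y, minority_label="fault", n_bags=3):
    print(sum(1 for _, lbl in bag if lbl == "fault"), "faults of", len(bag))
```

Because every majority sample still appears in some bag with high probability, the ensemble uses far more of the majority data than a single undersampled classifier would, without overwhelming the minority class.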
Xin Yao is a Chair Professor of Computer Science at the Southern University of Science and Technology, Shenzhen, China, and a part-time professor at the University of Birmingham, UK. His major research interests include evolutionary computation, ensemble learning and search-based software engineering. He is an IEEE fellow, a former (2014-15) President of the IEEE Computational Intelligence Society (CIS) and a former (2003-08) Editor-in-Chief of IEEE Transactions on Evolutionary Computation. His work won the 2001 IEEE Donald G. Fink Prize Paper Award, the 2010, 2016 and 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, the 2010 BT Gordon Radley Award for Best Author of Innovation (Finalist), the 2011 IEEE Transactions on Neural Networks Outstanding Paper Award, and many other best paper awards. He received a Royal Society Wolfson Research Merit Award in 2012, the IEEE CIS Evolutionary Computation Pioneer Award in 2013, and the 2020 IEEE Frank Rosenblatt Award.
Gal Chechik: Reasoning About Perception
AI aims to build systems that interact with their environment, with people and with other agents in the real world. This vision poses hard algorithmic challenges: from generalizing effectively from little or no samples, through adapting to new domains, to communicating in ways that are natural to people. I will discuss our recent research thrusts for facing these challenges. First, an approach to model the high-level structure of visual scenes. Second, leveraging compositional structures in attribute space to learn from descriptions without any visual samples. Finally, an approach where agents learn new concepts without labels, by using elimination to reason about their environment. Joint work with colleagues at NVIDIA and Bar-Ilan University.
Gal Chechik is a director of AI research at NVIDIA and a Professor at Bar-Ilan University, Israel. His research spans algorithms, theory and applications of deep learning, with a focus on strong generalization: few-shot and zero-shot learning, and adaptation to novel domains, for example in personalized federated learning. A particular interest is in perception, action, and reasoning (PAR) and their intersection for the purpose of smarter generalization.
Gal earned his PhD in 2004 from the Hebrew University, developing machine learning methods to study neural coding. He then worked at the Stanford CS department with D. Koller, studying computational principles regulating molecular pathways. In 2007 he joined Google Research as a senior research scientist, developing large-scale machine learning algorithms for machine perception. Since 2009, he has headed the learning systems lab at the Gonda center of Bar-Ilan University, and was appointed a full professor in 2019. Gal is the author of ~100 refereed publications and ~40 patents, including publications in Nature Biotechnology, Cell and PNAS. His work won best-paper awards at NeurIPS and ICML.
Matthias Grundmann: On-device ML Solutions for Mobile and Web
In this talk, I will present several on-device Machine Learning (ML) solutions for mobile and web that are powering a wide range of impactful Google Products. On-device ML has major benefits enabling low-latency, offline and privacy-preserving approaches. However, to ship these solutions in production, we need to overcome substantial technical challenges to deliver on-device ML in real-time and with low-latency. Once solved, our solutions power applications like background replacement and light adjustment in Google Meet, AR effects in YouTube and Duo, gesture controls of devices and view-finder tracking for Google Lens and Translate.
In this talk, I will cover some of the core recipes behind Google’s on-device ML solutions, from model design through ML solutions infrastructure (MediaPipe) to on-device ML inference acceleration. In particular, we will cover video segmentation, face meshes and iris tracking, hand tracking for gesture control, and body tracking to power 3D avatars. The covered solutions are also available to the research and developer community via MediaPipe, an open-source cross-platform framework for building customizable ML pipelines for mobile, web, desktop and Python.
Matthias Grundmann is a Director of Research at Google leading a team of ~40 Applied ML and Software Engineers with focus on on-device Machine Learning solutions. His team develops high-quality, cross-platform ML solutions (MediaPipe) powered by cutting-edge, accelerated ML inference for mobile and web.
UNINETT Sigma2 has the responsibility for and operates the national e-infrastructure for large-scale data- and computational science in Norway. We offer services in High-Performance Computing (HPC) and storage of scientific data. Through the e-infrastructure, Norwegian researchers and research institutions gain access to some of the world's most powerful computers. All research areas with a need for high-capacity computations and large scale data storage can apply for resources on the e-infrastructure.
Today, Sigma2 has users that span from climate and marine research to language, energy and health. For instance, the Norwegian Institute of Public Health (NIPH) uses Sigma2's services for calculating virus spread and anticipated vaccine effect in connection with COVID-19.
Sigma2 also provides services to industry through the competence centre collaboration with NORCE and SINTEF, and coordinates Norway’s participation in international collaborations.
Sigma2 activities are jointly financed by the Research Council of Norway (RCN) and the Sigma2 consortium partners, which are the universities in Oslo, Bergen, Trondheim and Tromsø. This collaboration goes by the name NRIS – Norwegian Research Infrastructure Services. The operation is run on a non-profit basis. Sigma2 is a subsidiary of Uninett AS with its head office in Trondheim.
Roger Kvam, UNINETT Sigma2. Senior project manager and leader of the National Competence Centre for HPC for Industry. Experienced in international IT management in Asia, Europe and the US, HPC architecture and operation for semiconductor engineering, and IT management and HPC for oil & gas exploration.
Stefano Soatto: Learning Representations
Representations are functions of the data that are "useful" for a task. Of all functions, one wishes to design or learn those that contain all the "information" in the data, and none of the variability that is irrelevant to the task. Depending on how one defines and measures "useful" and "information", different notions of representations can be instantiated. What are the relationships among those? Are there common principles behind the different tools and models? Is there a common notion of "optimality" that emerges from all formalisms? If so, are such optimal representations computable? If not, can they be approximated? If such representations are learned using "past data" (training set), can we predict how well they will perform on "future data" (test set)?
These questions have nothing to do with Deep Learning, but understanding them sets the stage for the second part of the lecture. In Deep Learning, we are given a training set, and we minimize a loss function that, at least at face value, knows nothing about "future data". Just like the activations of a network in response to a test datum can be understood as a representation of future data, the parameters (weights) of a network can be understood as a representation of the past training set. What properties should the weights exhibit that can be optimized during training, which ensure that desirable properties of the activations emerge? Is there something special in deep neural networks that addresses this issue of generalization? Do these properties translate to a variational principle? Does this principle have anything to do with optimality of representations? Can they be imbued into the optimization we use to train deep networks?
In this lecture we will derive a theory of representation that is the first to address these questions for deep learning. The question the theory answers is: "What are the functions of given (past) data one can compute, so that the resulting representation of future data is best for the task at hand?" What it does not address is what happens when the task is not completely specified beforehand. Furthermore, we will dive deep into how such representations can be computed in practice, and what to do when the task is not specified at the outset.
Prof. Stefano Soatto, Professor at UCLA and Director of Applied Science at Amazon AI. He is a professor of computer science with notable works in computer vision and in nonlinear estimation and control theory, including the use of sensing modalities (vision, sound, touch) to interact with humans and the environment.
Fridtjof Stein: Looking Far Ahead … Perception Challenges in the Field of Autonomous Trucking
Trucks are special in several respects; it is therefore only partially possible to transfer an existing sensor set of a robo-taxi to an autonomous truck. In this talk I will focus on the specific sensor challenges in the context of the different modalities, and I will elaborate on both hardware and software topics.
Dr. Fridtjof Stein is a senior scientist at Daimler Truck in the field of perception. He has worked for about three decades at Daimler on autonomous driving in public traffic, including real-time vision, especially stereo vision, optical flow, object detection, and ground modeling in the automotive domain.
Business talks: Success Stories – How did AI Shape Your Business?
Introduction by chair Anne Grete Ellingsen, project manager of the national European Digital Innovation Hub candidate, with a short presentation of the EU's programme on investment in digital infrastructure and the benefits for SMEs and start-ups.
Crayon presented by Geir Gulliksen
Intelecy presented by Espen Davidsen
Aquabyte presented by Trude Jansen Hagland
Rocketfarm presented by Hallvard Haugen
Idean presented by Lars Petter Aase
Florian Berger: Crayon (crayon.com)
Florian Berger is the VP for Data & AI Services at Crayon, leading delivery centers around the globe. His passion is in transforming organizations and developing next-generation businesses. Before joining Crayon, he worked in several innovation and transformation roles worldwide, bringing ideas to commercialization. Using his talent of identifying successful innovations, he also spent time in CVC investing in cutting-edge technologies.
Espen Davidsen: Intelecy (intelecy.com)
Intelecy is cloud-based software for predictive maintenance and process optimization, explicitly designed for Industrial IoT data. Never before have AI and machine learning technology been more accessible and easy to use. Intelecy quickly integrates with the most common industrial protocols and DCS and SCADA software. With unique and automated data cleaning, Intelecy automatically handles outliers and "bad-quality" data common to industrial process data. Understand what has happened, what is happening, and what will happen unless corrective actions are taken. Turn your data into valuable insights, and let those insights make your processes cleaner, safer and more efficient.
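To make the idea of automated outlier handling concrete, here is a minimal, generic sketch (not Intelecy's actual algorithm, whose details are not public): flagging sensor readings whose z-score within a trailing window exceeds a threshold. The data, window size, and threshold are invented for illustration.

```python
from statistics import mean, stdev

def flag_outliers(readings, window=20, threshold=3.0):
    """Flag readings whose z-score within a trailing window exceeds
    the threshold. A generic sketch of cleaning "bad-quality"
    industrial sensor data, not any vendor's actual method."""
    flags = []
    for i, x in enumerate(readings):
        past = readings[max(0, i - window):i]
        if len(past) < 2:
            flags.append(False)  # not enough history to judge
            continue
        mu, sigma = mean(past), stdev(past)
        flags.append(sigma > 0 and abs(x - mu) / sigma > threshold)
    return flags

# A steady signal with one spike: only the spike (index 7) is flagged.
data = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 55.0, 10.0, 9.9]
print(flag_outliers(data, window=5))
```

A real system would of course operate on streaming data at scale, but the core idea of judging each reading against its recent local statistics is the same.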
Espen Davidsen is COO and Co-Founder at Intelecy. Intelecy is determined to make industrial production cleaner, safer and more effective. Espen Davidsen is both an engineer and a strategic leader with more than 20 years of experience building technology and companies. He works best at the intersection of strategy, technology and operations. As COO at Intelecy, his focus is on building a strong team and ensuring rapid international growth. He holds an EMBA in Strategic Management from the Norwegian School of Economics (NHH) and a Master in Computing Science from the UK.
Trude Jansen Hagland: Aquabyte (aquabyte.no)
Aquaculture is one of the world’s fastest growing food production sectors and offers a meaningful solution to the global food deficit. Our social mission is to use software technology to make food production more efficient, sustainable, and profitable. Bryton Shang started Aquabyte with the idea of applying the most recent technology in machine learning to engage in more efficient fish farming. Three years later, Aquabyte has grown from a prototype in a bathtub in Silicon Valley, San Francisco, to become a software company that makes work easier for fish farms along the Norwegian coast, every single day.
We combine the best of innovation culture in Silicon Valley with the foremost expertise from Norwegian fish farming. This is why we are represented in both locations, not in the form of a main office and a subsidiary, but as two equally important working environments that are in continuous dialogue with one another. It is precisely the combination of these two environments, technology and aquaculture, that is so powerful.
Machine learning is a form of artificial intelligence. It is software, a network of algorithms, that can be trained to recognise and interpret patterns in images and data. The technology behind it is the same as that used for face recognition on Facebook, for interpreting the road in self-driving cars, or for recommending films and music on Spotify and Netflix based on previous preferences.
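The paragraph above can be illustrated with a toy sketch: a single-neuron classifier that is "trained" on labelled examples and learns a pattern it was never explicitly programmed with. The two-pixel "images" and the rule they follow are invented for illustration; real systems such as the ones named above use deep networks trained on vastly larger data.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit a single-neuron classifier: predict 1 if w.x + b > 0."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # adjust weights only when the guess is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Tiny 2-pixel "images": a bright (1) left pixel means class 1.
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = [1, 1, 0, 0]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
print(preds)  # the learned rule reproduces the labels [1, 1, 0, 0]
```

The point is only that the rule is learned from examples rather than hand-coded, which is the essence of the pattern recognition described above.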
Trude Jansen Hagland is the Head of Product at Aquabyte Global, where she leads product development. She has a master's degree in fish health biology from the University of Bergen and extensive, broad experience from the aquaculture industry. Before joining Aquabyte, Trude was the Technology Application Lead at Cargill, responsible for developing new products and solutions for the aquaculture industry. She has also worked for the Norwegian Food Safety Authority and Intervet Norbio (now MSD).
Hallvard Haugen: Rocketfarm (rocketfarm.no)
We are a relatively small, but fast growing company consisting of about 22 employees, based at Rockethouse in Sogndal on the west coast of Norway. Here we share offices with our sister company N-Link, making us a small but strong hub of robotics and technology.
Our best-known product is PALLY, software for cobots from Universal Robots that makes complex palletizing easy, but for the last three years we have also been working with AI. There, we first and foremost develop and deliver tailored machine vision systems for industry clients, with AI at their core.
Hallvard Haugen has worked at Rocketfarm as a software developer since 2019, focusing mostly on deep learning-based vision solutions for industry customers. He holds an M.Sc. in Applied Physics and Mathematics from the University of Tromsø and has a background in systems engineering at Kongsberg Satellite Services.
The scientific committee
Nabil Belbachir, NORCE, Norway
René Vidal, Johns Hopkins University, USA
Thomas Bäck, Leiden University and NORCE
Marius Leordeanu, Politehnica University of Bucharest