Invited Talks (2021 edition)

Opening Ceremony

[09:50 – 10:00] Opening the Workshop
Prof. Daniel David (Rector of Babeș-Bolyai University)

Meteorological Research

[10:00 – 10:30] Technology in weather forecasting (presentation, video)
Eugen Mihuleț (National Meteorological Administration, Romania)

In this year’s Global Risk Report issued by the World Economic Forum, extreme weather events continue to be the number one global risk by likelihood, while climate action failure is the most impactful and second most likely long-term risk. While these catastrophic indicators have become common topics in recent years, weather and climate have always been among the most important factors for life, providing the conditions for thriving or, by contrast, for extinction. Humans’ interest in the weather is therefore evident, and in the past century the science of meteorology has made immense progress, evolving from an empirical practice into a discipline defined by hard science and high technology. This presentation will give an overview of some of the technologies currently employed in meteorology, with emphasis on computer science and information technology, ending with a brief discussion of future trends and opportunities in the field.


[10:30 – 11:00] Weather radars – basic principles and application in nowcasting in Romania (presentation, video)
Sorin Burcea (National Meteorological Administration, Romania)

The weather radar is a sensor capable of determining the location, motion, and intensity of precipitating clouds, making it one of the main technologies employed by National Meteorological Services around the world. Weather radar technology provides near real-time, high spatio-temporal resolution measurements of atmospheric cloud systems, covering an area of a few hundred kilometers around each weather radar site. Hence, weather radars are also used for detecting convective storms and their associated severe weather phenomena, such as heavy rain, hail, and strong winds. This presentation will introduce the basic principles of weather radars (i.e., operation and data collection), the information they provide to meteorologists, their limitations, and future developments. Basic aspects of radar data interpretation, a key component of the nowcasting process and of issuing severe weather warnings, will be presented as well, including examples of past severe storms detected by the Romanian weather radars.


[11:00 – 11:30] Nowcasting on Yr – opportunities and challenges (presentation, video)
Ivar Seierstad & Thomas Nipen (Norwegian Meteorological Institute)

The Yr weather app is MET Norway’s main channel for disseminating weather forecasts to the public. Yr provides forecasts on timescales ranging from nowcasting to 10-day forecasts and has undergone a decade of development based on feedback from its large user base. Yr’s 90-minute precipitation nowcast uses an optical flow algorithm to detect the motion of precipitation in radar images and advects the precipitation forward in time. The advected fields are further post-processed to make the nowcast suitable as a point time series. We present the challenges of delivering a robust operational service using radar data in complex Norwegian terrain. We also show ongoing work on improving the nowcast by integrating other data sources, such as precipitation measurements from Netatmo’s dense network of personal weather stations.
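
As context for the advection step described above, here is a minimal, illustrative sketch of an optical-flow-based extrapolation nowcast: the motion field is estimated from two consecutive radar frames and the latest frame is then advected forward in small time steps. It uses OpenCV’s Farnebäck optical flow as a stand-in; the parameters and step count are assumptions, not MET Norway’s operational configuration.

    # Minimal optical-flow advection nowcast sketch (illustrative only).
    import numpy as np
    import cv2

    def advection_nowcast(frame_prev, frame_curr, n_steps=6):
        """Extrapolate frame_curr forward n_steps times using optical flow."""
        # Farnebäck optical flow expects 8-bit single-channel images.
        to_u8 = lambda f: cv2.normalize(f, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        flow = cv2.calcOpticalFlowFarneback(
            to_u8(frame_prev), to_u8(frame_curr), None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
        )
        h, w = frame_curr.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Backward (semi-Lagrangian) mapping: each output pixel is sampled
        # from the position its precipitation came from.
        map_x = (grid_x - flow[..., 0]).astype(np.float32)
        map_y = (grid_y - flow[..., 1]).astype(np.float32)

        forecasts, field = [], frame_curr.astype(np.float32)
        for _ in range(n_steps):
            field = cv2.remap(field, map_x, map_y, cv2.INTER_LINEAR)
            forecasts.append(field.copy())
        return forecasts  # e.g. 6 steps of 15 minutes for a 90-minute horizon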


[11:30 – 12:00] A brief introduction to the netCDF format and THREDDS data server (presentation, video)
Arild Burud (Norwegian Meteorological Institute)

The presentation will give the audience an understanding of how weather data – from observations to forecasts – can be shared internationally using the netCDF format and TDS. Topics include the standardisation of metadata through the CF Convention and ACDD, and how OPeNDAP and THREDDS can be used to expose large datasets while allowing users to extract only what they need. The data extraction tool FIMEX is used as an example of how this can be simplified for non-expert users.
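
As a small, hedged illustration of the “extract only what you need” idea above, the following sketch uses xarray to open a dataset served over OPeNDAP and download just one variable at one point; the URL and variable name are hypothetical placeholders, not an actual MET Norway endpoint.

    # Sketch: subsetting a netCDF dataset exposed via OPeNDAP on a THREDDS server.
    import xarray as xr

    # Opening an OPeNDAP URL reads only the metadata; no data is transferred yet.
    url = "https://thredds.example.org/thredds/dodsC/forecast/latest.nc"  # placeholder
    ds = xr.open_dataset(url)

    # CF / ACDD metadata describe the dataset and its variables.
    print(ds.attrs.get("title"), ds.attrs.get("Conventions"))

    # Select only what is needed: one variable, one point, a short time window.
    # Only this slice is actually downloaded from the server.
    subset = ds["air_temperature_2m"].sel(
        latitude=59.91, longitude=10.75, method="nearest"
    ).isel(time=slice(0, 24))
    subset.to_netcdf("extracted_point.nc")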


Deep Learning for Weather Nowcasting

[12:30 – 13:00] Computational models for nowcasting (presentation, video)
Prof. Istvan Czibula (Babeș-Bolyai University)

As the number and intensity of severe meteorological phenomena increase, predicting them in due time to avoid disasters becomes highly demanding for meteorologists. Deep learning approaches are known to offer good performance if a high volume of training data is available. Unlike traditional neural networks, deep networks are scalable: model performance is likely to improve as more data and larger models are deployed. This presentation will discuss computational approaches for applying deep learning models to the problem of short-term weather prediction. The aim is to provide a high-level overview of the available meteorological data, of the issues related to using meteorological data (especially radar data) from a computational viewpoint, and of various options for modelling weather nowcasting as a deep learning problem.
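
To make the last point concrete, here is a minimal sketch of one common way to cast nowcasting as a supervised learning problem: each training sample pairs a window of k consecutive radar frames with the frame that follows it. The array shapes and window length are illustrative assumptions, not a description of a specific model from the talk.

    # Sketch: turning a radar frame archive into supervised (input, target) pairs.
    import numpy as np

    def make_samples(radar_frames, k=4):
        """radar_frames: array of shape (T, H, W), ordered in time.

        Returns inputs of shape (N, k, H, W) and targets of shape (N, H, W):
        each input is a window of k consecutive frames and the target is the
        frame immediately following that window.
        """
        inputs, targets = [], []
        for t in range(len(radar_frames) - k):
            inputs.append(radar_frames[t:t + k])
            targets.append(radar_frames[t + k])
        return np.stack(inputs), np.stack(targets)

    # Synthetic data standing in for a radar archive.
    frames = np.random.rand(100, 128, 128).astype(np.float32)
    X, y = make_samples(frames, k=4)
    print(X.shape, y.shape)  # (96, 4, 128, 128) (96, 128, 128)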


[13:00 – 13:30] Supervised and unsupervised machine learning for nowcasting, applied on radar data from central Transylvania region (presentation, video)
Andrei Mihai, PhD (Babeș-Bolyai University)

During our research we studied multiple machine learning models for weather nowcasting. In order to validate them we needed real meteorological data, so we received a radar dataset covering the central Transylvania region from the Romanian National Meteorological Administration (ANM). In this presentation I will show how we used Self-Organizing Maps (SOMs), a kind of unsupervised neural network, to extract relevant information from the data before building supervised models. Using SOMs we showed empirically that the radar data changes very slowly over time, with the exception of a few significant moments related to severe weather phenomena, and that similar neighbourhoods of data at one moment lead to similar values at the next time moment. This led us to conclude that there are patterns in the data that can be learned, so that the data in the neighbourhood of a location can be used to predict the value at that location at the next time moment. With this in mind, we developed two supervised models that predict the value at one location based on the neighbourhood of that location at the previous time moment: NowDeepN, based on deep neural networks, and RadRAR, based on relational association rule mining (extracting rules from the data and predicting based on those rules). Finally, because of scaling and performance concerns, we approached the problem differently: what if we could predict all the data at one moment (over the entire area, not just one location) directly from the entire data at the previous moment? To this end, we developed the XNow model, based on convolutional neural networks.
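
The neighbourhood-to-point setup described above can be sketched as follows: the features for a grid cell are the values in its (2r+1)×(2r+1) neighbourhood at time t, and the target is that cell’s value at time t+1. This is a simplified stand-in for illustration, not the actual NowDeepN or RadRAR implementation.

    # Sketch: building neighbourhood-based training pairs from two consecutive frames.
    import numpy as np

    def neighbourhood_samples(frame_t, frame_t1, r=2):
        """Return (features, targets) for all cells that have a full neighbourhood."""
        h, w = frame_t.shape
        X, y = [], []
        for i in range(r, h - r):
            for j in range(r, w - r):
                X.append(frame_t[i - r:i + r + 1, j - r:j + r + 1].ravel())
                y.append(frame_t1[i, j])
        return np.array(X), np.array(y)

    # A small fully connected regressor could then be trained on these pairs,
    # for example (scikit-learn used here only as a convenient illustration):
    #   from sklearn.neural_network import MLPRegressor
    #   model = MLPRegressor(hidden_layer_sizes=(64, 32)).fit(X, y)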


[13:30 – 14:00] Deep learning models for composite reflectivity prediction (presentation, video)
Alexandra Albu, PhD (Babeș-Bolyai University)

Weather nowcasting represents a challenging problem with major practical relevance. One of the limitations of currently used numerical weather prediction algorithms is the fact that they require a long time for outputting accurate predictions, which makes them difficult to use in real-time nowcasting systems. From this perspective, deep learning algorithms trained to estimate certain radar measurements given the past values of those measurements can provide a more efficient solution. Recently, various deep learning methods have been proposed for tackling the nowcasting problem. A common downside of these approaches consists in blurry predictions and an underestimation of measurements corresponding to severe weather events, caused by the difficulty of properly exploiting the information in highly imbalanced data sets. We present an analysis of the radar composite reflectivity product available on the MET Norway THREDDS server, alongside preliminary experiments conducted with the aim of overcoming the limitations of current deep learning methods. The experiments investigate the performance of deep convolutional encoder-decoder architectures and various loss functions in capturing the complex spatio-temporal dependencies underlying the data.
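
As a hedged illustration of the kind of approach mentioned above, the sketch below combines a small convolutional encoder-decoder with a loss that upweights pixels above a reflectivity threshold, one simple way to counter the imbalance toward low-reflectivity values. The layer sizes, threshold and weighting scheme are assumptions for illustration, not the configuration evaluated in the talk.

    # Sketch: convolutional encoder-decoder with an imbalance-aware loss (PyTorch).
    import torch
    import torch.nn as nn

    class EncoderDecoder(nn.Module):
        def __init__(self, in_frames=4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_frames, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )

        def forward(self, x):  # x: (batch, in_frames, H, W)
            return self.decoder(self.encoder(x))

    def weighted_mse(pred, target, threshold=0.5, weight=10.0):
        """MSE that gives extra weight to pixels above a reflectivity threshold."""
        w = torch.ones_like(target)
        w[target > threshold] = weight
        return (w * (pred - target) ** 2).mean()

    model = EncoderDecoder()
    x = torch.rand(2, 4, 64, 64)  # two 4-frame input sequences
    loss = weighted_mse(model(x), torch.rand(2, 1, 64, 64))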


[14:00 – 14:30] Deep neural network models for nowcasting using satellite data (presentation, video)
Lect. Vlad Ionescu, PhD (Babeș-Bolyai University)

Nowcasting, which means forecasting the weather for a short time period (such as 10-15 minutes from now), is an important problem for meteorologists and, because satellite data consists of images, it is a suitable target for machine learning applications. The task is complex, due to the large volume of data that needs to be processed in a short timespan in order to accurately issue nowcasting alerts. However, in recent years, advances in machine learning architectures and fast training algorithms have resulted in models that can make predictions from images with accuracy that exceeds even that of human experts in many fields. We will discuss such applications and present various models for this task, as well as possible future research directions. Our focus will be on our own DeePSat model, a convolutional neural network architecture for short-term satellite image prediction for the purpose of nowcasting. Comparisons with other approaches from the literature will also be analyzed, and our future plans for this task will be presented as well.


Applications of Deep Learning

[15:00 – 15:30] Enhancing the performance of indoor-outdoor image classifications using features extracted from depth-maps (presentation, video)
George Ciubotariu (Babeș-Bolyai University)

This presentation tackles a Computer Vision problem: classifying indoor and outdoor images using Deep Learning models. To do so, we perform an unsupervised learning analysis with the aim of determining the relevance of depth maps in the context of classification. To decide on the granularity at which information should be extracted, features are aggregated from sub-images of different sizes from the DIODE dataset, comparing multiple scales of region attention. The results are then compared with supervised methods to confirm the advantage of using depth information. The broader goal of this research is to identify suitable networks for indoor-outdoor classification and, additionally, to emphasise the benefits of training Deep Learning models for dense visual tasks such as depth estimation on large data sets with a wide variety of scenes, in order to boost their performance and test their ability to perform in any situation.
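
A minimal sketch of the kind of analysis described above, under simplifying assumptions: aggregate a few statistics from fixed-size sub-images of each depth map and cluster the resulting feature vectors in an unsupervised way. The patch size and statistics are illustrative, and loading of the DIODE dataset itself is not shown.

    # Sketch: per-patch depth features followed by unsupervised clustering.
    import numpy as np
    from sklearn.cluster import KMeans

    def patch_features(depth_map, patch=64):
        """Aggregate simple statistics from non-overlapping patches of a depth map."""
        h, w = depth_map.shape
        feats = []
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                p = depth_map[i:i + patch, j:j + patch]
                feats.append([p.mean(), p.std(), p.min(), p.max()])
        return np.array(feats)

    # depth_maps: list of (H, W) arrays, e.g. loaded from the DIODE dataset.
    depth_maps = [np.random.rand(256, 256) for _ in range(10)]  # placeholder data
    features = np.vstack([patch_features(d) for d in depth_maps])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)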


[15:30 – 16:00] Review and analysis of grayscale photography colorization using CNNs (presentation, video)
Alexandru-Marian Adăscăliței (Babeș-Bolyai University)

Through the process of colorization, one aims to convert a grayscale image into a color image, usually because it was captured with the limited technology of previous decades. Although the surveyed methods can be applied to other fields, only photographic content is considered here. We curated some of the most promising papers, published between 2016 and 2021, providing balanced observations regarding software reliability and deep learning approaches. Our contribution lies in the analysis of colorization in photography by examining the data sets and evaluation methodologies used, the data processing activities, and the research directions previous work has followed. When studying how convolutional blocks are included in a model, we are interested both in the internal organisation, studying traits shared among various architectures, and in the direct implications of photography characteristics for those decisions, using information such as the histogram, textual resources describing the image context, or user-provided hints. The challenge extends even beyond obtaining the results, since the literature relies on diverse evaluation metrics. For this reason, we decided to evaluate state-of-the-art models on a data set we assembled, so that a comparison can be made.


[16:00 – 16:30] A machine learning approach for data protection in VR therapy applications (presentation, video)
Maria-Mădălina Mircea (Babeș-Bolyai University)

Health information is a protected asset that should be kept private. The tradability of personal data brings risks to the health information shared by users online. Authentication is a crucial first step in keeping personal information private. Virtual Reality applications usually bypass application-specific authentication in favor of provider-specific authentication (e.g., Steam). This approach is not ideal for health applications. Secure authentication in Virtual Reality can be difficult because of the lack of access to a keyboard. Previously proposed VR authentication systems use PINs, patterns, or 3D object sequences. We propose a two-step authentication method. The first step requires the user to speak their chosen username and records their voice; a machine learning-based model determines whether the recording is real or fake (i.e., liveness detection). The second step requires the user to perform a chosen dynamic movement; a machine learning-based model determines whether the received movement matches the user’s previously chosen movement. The proposed method thus maintains security while offering better versatility than traditional methods.
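
The two-step flow can be summarised with the following sketch; the model interfaces (is_live, enrolled_movement, matches) are hypothetical placeholders used only to make the control flow explicit, not the authors’ code.

    # Sketch of the proposed two-step authentication flow (hypothetical interfaces).
    def authenticate(user, voice_recording, movement_sequence,
                     liveness_model, movement_model):
        """Return True only if both authentication steps succeed."""
        # Step 1: the spoken username must be judged a live (non-replayed)
        # recording by the machine learning liveness detector.
        if not liveness_model.is_live(voice_recording):
            return False

        # Step 2: the performed VR movement must match the dynamic movement
        # the user registered at enrolment time.
        enrolled = movement_model.enrolled_movement(user)
        return movement_model.matches(movement_sequence, enrolled)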


[16:30 – 17:00] Training data augmentation for reinforcement learning based trading algorithms using adversarial techniques (presentation, video)
Andrei Bratu (Babeș-Bolyai University)

Trading is an active topic in the discipline of economics. It consists of the buying and selling of various financial instruments, from stocks, futures and options to cryptocurrencies, with the purpose of turning a profit. Trading strategies are broadly split into fundamental strategies, where a person analyzes the environment in which the instrument evolves (news, company quarterly reports, etc.), and technical strategies, where trends and fluctuations are predicted using only the price. Algorithm-based technical strategies, deterministic or non-deterministic, have seen a massive influx in the 21st century, with domain experts estimating that around 75% of trades are carried out algorithmically. The main goal of our research is to assess the viability of using synthetically generated training data to enhance the performance of reinforcement learning-based trading algorithms. Inspired by adversarial attacks, where input data is deliberately engineered to produce an error from a model, we use a generative adversarial network (GAN) architecture to engineer difficult yet credible scenarios that enhance the training of a model.
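
As an illustration of the adversarial idea described above, the sketch below shows a minimal GAN whose generator produces synthetic price-return sequences that could augment the training data of an RL trading agent; the network sizes and training step are generic assumptions, not the architecture used in this work.

    # Sketch: a minimal GAN over price-return sequences (PyTorch).
    import torch
    import torch.nn as nn

    SEQ_LEN, NOISE_DIM = 64, 16

    generator = nn.Sequential(          # noise -> synthetic return series
        nn.Linear(NOISE_DIM, 128), nn.ReLU(),
        nn.Linear(128, SEQ_LEN),
    )
    discriminator = nn.Sequential(      # return series -> real-vs-fake logit
        nn.Linear(SEQ_LEN, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1),
    )

    def training_step(real_returns, g_opt, d_opt):
        """One adversarial update on a batch of real return series."""
        bce = nn.BCEWithLogitsLoss()
        noise = torch.randn(real_returns.size(0), NOISE_DIM)
        fake = generator(noise)

        # Discriminator: label real series 1, generated series 0.
        d_loss = (bce(discriminator(real_returns), torch.ones(len(real_returns), 1))
                  + bce(discriminator(fake.detach()), torch.zeros(len(fake), 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator: try to make the discriminator label generated series as real.
        g_loss = bce(discriminator(fake), torch.ones(len(fake), 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    # g_opt, d_opt would typically be torch.optim.Adam over the respective parameters.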


[17:00 – 17:30] DNA classification using supervised deep learning (presentation, video)
Iulia-Monica Szuhai (Babeș-Bolyai University)

Deoxyribonucleic acid, or DNA for short, encodes the entire genetic information of all living organisms. Thus, the study of DNA gathered from archaeological sites might reveal insightful information regarding ancient life. Ancient DNA is subject to degradation and contamination due to natural or artificial causes, so it is often difficult to discern between modern and ancient DNA. Therefore, the aim of our research is to analyse different machine learning methods for discriminating ancient from modern DNA. Our experiments cover four different representations of the DNA. Two of the representations take into account relationships between single or neighbouring nucleotides: one computes the frequencies of appearance of single nucleotides and of pairs and triplets of consecutive nucleotides, while the other uses a well-known text statistical measure, term frequency – inverse document frequency (TF-IDF), which computes how relevant pairs and triplets of consecutive nucleotides are in a collection of given DNA sequences. Another representation relies on a set of five physical and chemical properties of DNA, while the final representation one-hot encodes each consecutive triplet in a DNA strand. Subsequently, we investigate supervised methods for discriminating between ancient and modern DNA sequences. We experiment with ANNs using the frequency representation, the TF-IDF representation and the physical and chemical properties representation. Further, we investigate the classification capability of a convolutional neural network that receives as input DNA sequences represented via one-hot encoding. Both learning directions show encouraging results in the experimental evaluation, which has been performed on a large dataset containing more than 450,000 DNA sequences.
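
The frequency-based representation mentioned above can be illustrated with a short sketch that counts single nucleotides, pairs and triplets and normalises them into a fixed-length feature vector; this is an illustrative re-implementation, not the authors’ code.

    # Sketch: k-mer frequency features (k = 1, 2, 3) for a DNA sequence.
    from collections import Counter
    from itertools import product

    NUCLEOTIDES = "ACGT"

    def kmer_frequencies(sequence, k_values=(1, 2, 3)):
        """Return a fixed-length frequency vector over all 1-, 2- and 3-mers."""
        features = []
        for k in k_values:
            counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
            total = max(sum(counts.values()), 1)
            for kmer in ("".join(p) for p in product(NUCLEOTIDES, repeat=k)):
                features.append(counts[kmer] / total)
        return features  # length 4 + 16 + 64 = 84

    print(len(kmer_frequencies("ACGTACGTTGCA")))  # 84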