Training camps

Training camps 2020 - 2021

 

*** Training Camp by Unicredit ***
Date: June 30th - July 2nd, 2021
Title: Knowledge Graph Completion


Abstract
A Knowledge Graph is a powerful semantic model for representing complex relations between real-world entities such as news items and people. Thanks to their ability to model large-scale data from a vast number of heterogeneous sources, knowledge graphs are nowadays the model of choice for many application domains that face the challenge of extracting and structuring knowledge about millions of entities and relations.
Modern enterprise knowledge graphs are built by ingesting vast amounts of data.
However, despite these efforts, many entities and relations are not properly captured in the resulting model. To fill in this missing information, researchers and practitioners have studied the task of Knowledge Graph Completion.
The aim of this course is to give an overview of the main state-of-the-art techniques for link prediction and knowledge graph completion. More specifically, students will learn:

  *   Link prediction using Machine Learning
  *   Knowledge graph embedding
  *   Rule mining for knowledge graphs
We will also invite the students to participate in an online Kaggle competition on real-world knowledge bases.
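To give a flavor of knowledge graph embedding and link prediction, one well-known embedding model is TransE, which represents a true triple (h, r, t) as h + r ≈ t in vector space. The following is a minimal sketch, not course material: the entities, relation, and embeddings below are hypothetical toy examples, and the random vectors stand in for embeddings that would normally be learned from training triples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary (hypothetical example data, not a real knowledge base).
entities = ["Rome", "Italy", "Paris", "France"]
relations = ["capital_of"]

# Random stand-ins for embeddings that would normally be trained.
dim = 8
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def transe_score(head, relation, tail):
    """TransE plausibility score: higher (closer to 0) means more plausible.

    TransE models a true triple (h, r, t) as h + r ≈ t, so the score is
    the negated distance between h + r and t.
    """
    h, r, t = ent_emb[head], rel_emb[relation], ent_emb[tail]
    return -np.linalg.norm(h + r - t)

# Link prediction: rank candidate tails for the query (Rome, capital_of, ?).
candidates = sorted(entities,
                    key=lambda t: transe_score("Rome", "capital_of", t),
                    reverse=True)
print(candidates)
```

With untrained random embeddings the ranking is of course meaningless; after training on observed triples, high-scoring candidates become predictions for missing links.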

The course will assume that students have solid knowledge of Python and of the scikit-learn machine learning library. Knowledge of neural network libraries such as TensorFlow or PyTorch and of the graph manipulation library NetworkX is recommended, but not strictly required. We will provide self-study references for students to catch up on these libraries.



*** Training Camp by Mykhaylo Andriluka, Google Research ***
Date: September 2nd - 4th, 2021


Title: Building an image search engine

Abstract:
How could we build an automated system that finds a photograph in a family album or an online photo collection given just a textual description? In this course we will cover fundamental techniques in computer vision and natural language processing that help address this question. The main aim of the course is to enable students to build their own prototype of an image search engine and to participate in an online Kaggle competition organized for course participants. To aid students in this mission, we will review common methods for representing images and text with neural networks.

Specific techniques that the students will learn:
- image representation with convolutional neural networks (CNNs)
- building recurrent neural networks with LSTM and GRU units
- generating natural language image descriptions (image captioning)
- representing words and sentences with vector embeddings (Word2Vec, GloVe, and BERT)
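These techniques typically come together in a retrieval loop: images and text queries are encoded into a shared vector space and ranked by similarity. The following is an illustrative sketch only, assuming hypothetical precomputed embeddings; the random vectors here stand in for the outputs of a CNN image encoder and a text encoder (e.g. averaged word embeddings), and the file names are made up.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical precomputed index: in a real system each vector would come
# from a CNN applied to the image, mapped into the same space as text.
rng = np.random.default_rng(1)
image_embeddings = {f"photo_{i}.jpg": rng.normal(size=128) for i in range(5)}

def search(query_embedding, index, top_k=3):
    """Rank indexed images by cosine similarity to the query embedding."""
    ranked = sorted(index.items(),
                    key=lambda item: cosine_similarity(query_embedding, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A random stand-in for an encoded text query such as "dog on a beach".
query = rng.normal(size=128)
print(search(query, image_embeddings))
```

The course's image and text representation methods plug into this skeleton as the encoders; the ranking step itself stays the same regardless of how the embeddings are produced.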

The course will assume that students have solid knowledge of Python and of the numerical computation package NumPy. Knowledge of neural network libraries such as TensorFlow and PyTorch is highly recommended, but not strictly required. We will provide self-study materials for students to catch up on these libraries.