Artificial Intelligence in Progress


Displaying theses 1-10 of 508 total

J.H. Winkens
Master programme: Artificial Intelligence | February 14th, 2019
Institute: Informatics Institute | Research group: Amsterdam Machine Learning Lab | Graduation thesis | Supervisor: Geert Litjens
Out-of-distribution detection for computational pathology with multi-head ensembles
Distribution shift is common in real-world safety-critical settings and is detrimental to the performance of current deep learning models. A principled method to detect such shift is critical to building safe and predictable automated image analysis pipelines for medical imaging. In this work, we cast out-of-distribution detection for computational pathology as an epistemic uncertainty estimation problem. Given the difficulty of obtaining a sufficiently multi-modal predictive distribution for uncertainty estimation, we present a multi-head topology for CNNs as a highly diverse ensembling method. We show empirically that the method exhibits greater representational diversity than popular ensembling methods such as MC dropout and Deep Ensembles. We repurpose the fast gradient sign method and show that it separates the softmax scores of in-distribution and out-of-distribution samples. We identify the challenges this task poses in computational pathology and extensively demonstrate the effectiveness of the proposed method on two clinically relevant tasks in this field.
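The multi-head disagreement idea described in this abstract can be illustrated with a toy NumPy sketch. All weights, dimensions, and names below are invented (random, untrained heads), not the thesis model; the sketch only shows the standard ensemble uncertainty score (mutual information between prediction and head identity):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical setup: K softmax heads with independently drawn (untrained)
# weights standing in for the multi-head CNN on shared features.
K, D, C = 8, 16, 3
heads = [rng.normal(size=(D, C)) for _ in range(K)]

def head_predictions(x):
    return np.stack([softmax(x @ W) for W in heads])        # shape (K, C)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def disagreement(x):
    p = head_predictions(x)
    # Entropy of the mean prediction minus the mean per-head entropy:
    # high when heads disagree, which flags epistemic uncertainty / OOD.
    return entropy(p.mean(axis=0)) - entropy(p).mean(axis=0)
```

A sample on which the heads disagree receives a higher score, which can then be thresholded for out-of-distribution detection.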
Scientific abstract (pdf 1K)   Full text (pdf 12355K)

T.F.A. van der Ouderaa
Master programme: Artificial Intelligence | January 31st, 2019
Institute: UvA / Other | Research group: UvA / Other | Graduation thesis | Supervisor: Daniel E. Worrall
Reversible Networks for Memory-efficient Image-to-Image Translation in 3D Medical Imaging
The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We are able to demonstrate superior quantitative output on the Cityscapes and Maps datasets. Additionally, we show that the model allows us to perform several memory-intensive medical imaging tasks, including a super-resolution problem on 3D MRI brain volumes. We also demonstrate that our model can perform a 3D domain-adaptation and 3D super-resolution task on chest CT volumes. By doing this, we provide a proof-of-principle for using reversible networks to create a model capable of pre-processing 3D CT scans to high resolution with a standardized appearance.
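The memory argument rests on invertible (coupling) layers: activations need not be stored because inputs can be recomputed from outputs. A minimal NumPy sketch of an additive coupling block (the sub-networks `f` and `g` are hypothetical random maps, not the thesis architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
W_f = rng.normal(size=(4, 4))
W_g = rng.normal(size=(4, 4))
f = lambda h: np.tanh(h @ W_f)   # hypothetical sub-network
g = lambda h: np.tanh(h @ W_g)   # hypothetical sub-network

def forward(x1, x2):
    # Additive coupling: each half is updated using only the other half,
    # so the block is invertible by construction.
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    # Exact inverse: recompute activations instead of storing them,
    # giving constant memory cost in network depth.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2
```

Stacking such blocks yields a network whose activations can be reconstructed on the fly during backpropagation, which is what makes very deep 3D models fit in memory.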
Scientific abstract (pdf 1K)   Full text (pdf 19227K)

R.J.P. Bakker
Master programme: Artificial Intelligence | December 19th, 2018
Institute: UvA / Other | Research group: UvA / Other | Graduation thesis | Supervisor: Maarten Marx
Evolving Regular Expression Features for Text Classification with Genetic Programming
Text classification algorithms often rely on vocabulary counts such as bag-of-words or character n-grams to represent text as a vector suitable for machine learning. In this work, automatically generated regular expressions are proposed as an alternative feature set. The proposed algorithm uses genetic programming to evolve a set of regular-expression features from labeled text data and trains a classifier in an end-to-end fashion. A comparison with traditional text features indicates that a classifier using the generated features alone does not make better predictions, but the generated features capture patterns that the traditional features cannot. As a result, a classifier that combines traditional features with the generated ones improves significantly.
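The core loop can be sketched in a few lines of standard-library Python. The primitive set, toy corpus, and mutation operator below are invented for illustration; the thesis's genetic-programming operators and fitness function will differ:

```python
import re
import random

random.seed(0)
PRIMS = [r"\d+", r"[a-z]+", r"\s", "o"]      # tiny primitive set (illustrative)
texts  = ["order #123 shipped", "hello there", "invoice 99 due", "see you soon"]
labels = [1, 0, 1, 0]                        # toy target: mentions a number?

def feature(pattern, docs):
    # A regex acts as one binary feature: does it match the document?
    return [1 if re.search(pattern, d) else 0 for d in docs]

def fitness(genome):                         # genome: tuple of primitives
    f = feature("".join(genome), texts)
    return sum(int(a == b) for a, b in zip(f, labels))

def mutate(genome):
    # Point mutation: replace one primitive with a random one.
    i = random.randrange(len(genome))
    return genome[:i] + (random.choice(PRIMS),) + genome[i + 1:]

pop = [tuple(random.choice(PRIMS) for _ in range(2)) for _ in range(20)]
for _ in range(15):                          # tiny (mu + lambda) loop
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(g) for g in pop[:10]]
best = max(pop, key=fitness)
```

Each evolved regex contributes one column of a binary feature matrix, which can then be fed to any downstream classifier alongside traditional features.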
Scientific abstract (pdf 1K)   Full text (pdf 2686K)

J.H.J. Linmans
Master programme: Artificial Intelligence | December 13th, 2018
Institute: Informatics Institute | Research group: Amsterdam Machine Learning Lab | Graduation thesis | Supervisor: Rianne van den Berg
Introspective Generative Modeling for Multimodal Translation
Developments in generative modeling have resulted in many successful models and related learning algorithms. However, these have been less successful than models applied to discriminative tasks such as classification; generative modeling is generally considered a much harder task than discriminative learning. Yet few generative models fully exploit the representational power of discriminative modeling. In this thesis we study a new type of generative model based solely on discriminative neural networks and apply it to domain translation between images and text. We show that the proposed model generates high-quality images from an image description and can furthermore transform existing images based on a change in the corresponding caption.
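One way to read "generative modeling from a discriminative network" is introspective sampling: ascend the gradient of a classifier's "real" score in input space. The sketch below is a toy 1-layer illustration of that idea with invented, untrained weights; it is not the thesis model:

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=8)                      # hypothetical discriminator weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
D = lambda x: sigmoid(x @ w)                # "how real does x look?" score

def refine(x, steps=50, lr=0.1):
    # Introspective step: move x along d/dx log D(x), so the sample
    # becomes more plausible under the discriminative model.
    for _ in range(steps):
        x = x + lr * (1.0 - D(x)) * w       # gradient of log sigmoid(w . x)
    return x

x0 = rng.normal(size=8)                     # start from noise
x1 = refine(x0)
```

In a real model the linear scorer is replaced by a deep network and the gradient is taken through it, but the sampling principle is the same.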
Scientific abstract (pdf 1K)   For more info or full text, mail to:

K.W. Korrel
Master programme: Artificial Intelligence | November 26th, 2018
Institute: ILLC | Research group: Language and Computation | Graduation thesis | Supervisor: Dieuwke Hupkes
From Sequence to Attention; Search for a Compositional Bias in Sequence-to-Sequence Models
Although sequence-to-sequence models have successfully been applied to many tasks, they have been shown to have poor compositional skills. The principle of compositionality states that the meaning of a complex expression is a function only of its constituents and the manner in which they are combined. A model that understands the individual constituents and can combine them in novel ways should therefore be able to learn efficiently and generalize. We first develop Attentive Guidance and show that guiding a sequence-to-sequence model's attention can help it find disentangled representations of the input symbols and process them individually. We then develop the sequence-to-attention architecture, a new model for sequence-to-sequence tasks that places more emphasis on sparse attention modeling. We show that this architecture finds compositional solutions similar to those obtained with Attentive Guidance, without requiring attention annotations in the training data.
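The guidance idea can be sketched as an auxiliary loss that pulls each decoder step's attention distribution toward an annotated input position. The functions below are a generic NumPy illustration (scaled dot-product attention plus a cross-entropy-style guidance term); the exact loss in the thesis may differ:

```python
import numpy as np

def attention_weights(queries, keys):
    # Standard scaled dot-product attention; each row is a distribution
    # over input positions for one output step.
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def guidance_loss(attn, target):
    # Attentive Guidance (sketch): cross-entropy between the attention
    # rows and one-hot target alignments from annotations.
    return -np.mean(np.log((attn * target).sum(axis=1) + 1e-12))
```

During training this term is added to the usual sequence loss, rewarding attention patterns that attend to one input symbol at a time.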
Scientific abstract (pdf 2K)   Full text (pdf 1450K)

L.J. Pascha
Master programme: Artificial Intelligence | November 26th, 2018
Institute: Informatics Institute | Research group: QUVA Lab | Graduation thesis | Supervisor: Efstratios Gavves
Improving Word Embeddings for Zero-Shot Event Localisation by Combining Relational Knowledge with Distributional Semantics
Temporal localisation of events described by natural-language queries is a novel task in computer vision, and no consensus has yet been reached on how to predict the temporal boundaries of action segments precisely. While most attention in the literature has been devoted to the representation of vision, here we attempt to improve the representation of language for event localisation by applying graph convolutions (GraphSAGE) to ConceptNet with distributional node-embedding features. We argue that, given the large vocabulary of natural language and the small scale of current temporally sentence-annotated datasets, performance depends heavily on zero-shot generalisation. We hypothesise that our approach yields more visually centred and structured language embeddings that benefit this task. To test this, we design a large-scale zero-shot dataset based on ImageNet on which to optimise our embeddings, and compare against other language-embedding methods. We obtain state-of-the-art results on 5 of 17 popular intrinsic evaluation benchmarks, but slightly lower performance on the TACoS dataset. Because the train and test vocabularies overlap almost completely, we deem additional testing necessary on a dataset that places more emphasis on word relatedness (hypernyms, hyponyms and synonyms), which arguably makes language representation learning harder.
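One GraphSAGE layer with the mean aggregator can be sketched in NumPy as follows. The three-node graph, feature dimensions, and weights are invented stand-ins for ConceptNet nodes with distributional features:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy graph: node -> neighbours (hypothetical ConceptNet-like structure).
adj = {0: [1, 2], 1: [0], 2: [0, 1]}
X = rng.normal(size=(3, 4))                 # distributional node features
W_self  = rng.normal(size=(4, 4))
W_neigh = rng.normal(size=(4, 4))

def sage_layer(X, adj):
    out = np.empty_like(X)
    for v, nbrs in adj.items():
        agg = X[nbrs].mean(axis=0)          # mean aggregator over neighbours
        h = X[v] @ W_self + agg @ W_neigh   # combine self and neighbourhood
        out[v] = h / (np.linalg.norm(h) + 1e-12)   # L2-normalise, as in GraphSAGE
    return out

H = sage_layer(X, adj)
```

Stacking such layers lets each word embedding absorb relational structure from the knowledge graph on top of its distributional initialisation.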
Scientific abstract (pdf 1K)   Full text (pdf 8953K)

T.E. Koenen
Master programme: Artificial Intelligence | November 22nd, 2018
Institute: VU / Other | Research group: Knowledge Representation and Reasoning | Graduation thesis | Supervisor: Peter Bloem
Text Generation and Annotation with Joint Multimodal Variational Autoencoders
Joint multimodal variational autoencoders are applied here to annotated text data. This yields a generative model over the two separate domains and also allows cross-domain mapping: mono-modal data is encoded and the latent variable is decoded with the joint decoder.
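The wiring of the cross-domain mapping can be sketched as follows. All encoders/decoders below are hypothetical untrained linear maps, chosen only to show the data flow (mono-modal encoding, reparameterisation, joint decoding):

```python
import numpy as np

rng = np.random.default_rng(4)
Dt, Di, Dz = 6, 8, 3                        # text dim, image dim, latent dim

# Hypothetical (untrained) linear encoder and joint decoder heads.
E_text  = rng.normal(size=(Dt, 2 * Dz))    # outputs mean and log-variance
D_text  = rng.normal(size=(Dz, Dt))
D_image = rng.normal(size=(Dz, Di))

def encode_text(x):
    h = x @ E_text
    mu, logvar = h[:Dz], h[Dz:]
    # Reparameterisation trick: sample the latent from the text posterior.
    return mu + np.exp(0.5 * logvar) * rng.normal(size=Dz)

def decode_joint(z):
    # One joint decoder emits both modalities from the shared latent.
    return z @ D_text, z @ D_image

z = encode_text(rng.normal(size=Dt))        # encode mono-modal (text only)
text_out, image_out = decode_joint(z)       # cross-domain: text -> both modalities
```

Annotation is the same path in reverse: encode the other modality alone and read off the text head of the joint decoder.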
Scientific abstract (pdf 1K)   Full text (pdf 1784K)

J. Köhler
Master programme: Artificial Intelligence | November 21st, 2018
Institute: Other | Research group: Max Planck Institute for Intelligent Systems / EI | Graduation thesis | Supervisor: Efstratios Gavves
Differentially private data release by optimal compression
In this thesis we study how compressing a data set optimally, according to a measure of utility, yields a mechanism that hides individuals within the set while maintaining its statistical usefulness for analysis tasks. As optimal compression is inherently difficult, we study two tractable instances that allow analytic sampling and analysis, and evaluate both for their usefulness in practical applications. Finally, we sketch how this work could be extended to weaker assumptions on the data set or the utility measure.
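A toy illustration of the compression-hides-individuals intuition (not the thesis mechanism): quantise a data set to k centroids with Lloyd's algorithm, the optimal fixed-size code under squared error, and release only the centroids and counts instead of the raw records:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(size=200)                 # hypothetical 1-D records

def compress(data, k=4, iters=20):
    # Lloyd's algorithm: alternately assign points to the nearest centroid
    # and move each centroid to the mean of its cluster.
    centroids = np.quantile(data, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        assign = np.abs(data[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = data[assign == j].mean()
    return centroids, assign

centroids, assign = compress(data)
```

The released summary preserves coarse statistics (e.g. the mean) while discarding the individual values; actual privacy guarantees of course require the careful analysis done in the thesis, not just lossy compression.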
Scientific abstract (pdf 1K)   Full text (pdf 794K)

S.G.J. Bouwmeester
Master programme: Artificial Intelligence | October 12th, 2018
Institute: ILLC | Research group: Dialogue Modelling Group | Graduation thesis | Supervisor: Raquel Fernandez
Analysing Seq-to-seq Models in Goal-oriented Dialogue: Generalising to Disfluencies.
Data-driven dialogue systems are still far from understanding natural dialogue. Several aspects of natural language are hard to capture in a system, such as unpredictability, mistakes and the breadth of the domain. In this thesis we take a step towards more natural data by examining disfluencies (i.e. mistakes). We test sequence-to-sequence models with attention on goal-oriented dialogue; these models were chosen to cope with the unpredictability of mistakes, since they are known for their ability to generalise to unseen examples. The models are tested on disfluent dialogue data, the bAbI+ task, in addition to regular goal-oriented dialogue data, the bAbI task. In contrast to previous findings with memory networks, we find that the sequence-to-sequence model performs well on both the bAbI and the bAbI+ task, achieving near-perfect scores. A decrease in performance is observed when disfluencies are introduced only in the test data: accuracy drops to 80% in this condition. This is surprising, because memory networks are very similar to sequence-to-sequence models with attention.
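The kind of disfluency injection used to build bAbI+-style data can be sketched with the standard library. The marker list, probability, and restart rule below are invented for illustration; the real bAbI+ generator uses its own patterns (hesitations, restarts, corrections):

```python
import random

random.seed(0)
HESITATIONS = ["uhm", "uh"]                 # illustrative bAbI+-style markers

def add_disfluency(utterance, p=0.3):
    """After each word, with probability p, insert a hesitation followed
    by a restart (the word repeated), mimicking disfluent speech."""
    out = []
    for w in utterance.split():
        out.append(w)
        if random.random() < p:
            out.append(random.choice(HESITATIONS))
            out.append(w)                   # simple restart: repeat the word
    return " ".join(out)

clean = "i would like a table for six people"
noisy = add_disfluency(clean)
```

Injecting such noise only at test time is exactly the mismatched condition in which the abstract reports the accuracy drop.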
Scientific abstract (pdf 2K)   Full text (pdf 2316K)

F. Ambrogi
Master programme: Artificial Intelligence | September 28th, 2018
Institute: Informatics Institute | Research group: Computer Vision | Graduation thesis | Supervisor: Arnoud Visser
Evolving a Spiking Neural Network controller for low gravity environments
We develop an efficient and quickly trainable SNN controller for a legged rover operating in low-gravity conditions. Several evolutionary algorithms were used to optimise the controller. Simulations are performed in MuJoCo through the OpenAI Gym interface; the architecture is first tested on standard Gym benchmarks and then run in a purpose-built low-gravity environment.
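The evolutionary optimisation loop can be sketched without any simulator. Below, a (1+lambda) evolution strategy improves a parameter vector against a stand-in fitness function; in the thesis the fitness would be the return of a MuJoCo rollout of the SNN controller, and the dimensions here are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
target = rng.normal(size=10)                # stand-in for a good controller

def fitness(params):
    # Placeholder for a simulated rollout return; here, closeness to target.
    return -np.sum((params - target) ** 2)

def evolve(generations=200, lam=16, sigma=0.1):
    parent = np.zeros(10)
    for _ in range(generations):
        # Sample lambda Gaussian perturbations of the current parent.
        children = parent + sigma * rng.normal(size=(lam, 10))
        best = children[np.argmax([fitness(c) for c in children])]
        if fitness(best) > fitness(parent): # (1+lambda) elitist selection
            parent = best
    return parent

solution = evolve()
```

The same loop applies to spiking-network weights: only the fitness evaluation (a physics rollout) changes, which is what makes evolutionary methods attractive for non-differentiable SNN controllers.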
Scientific abstract (pdf 2K)   Full text (zip 14843K)

